00:00:00.001 Started by upstream project "autotest-per-patch" build number 126256
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.064 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.064 The recommended git tool is: git
00:00:00.065 using credential 00000000-0000-0000-0000-000000000002
00:00:00.066 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.099 Fetching changes from the remote Git repository
00:00:00.102 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.155 Using shallow fetch with depth 1
00:00:00.155 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.155 > git --version # timeout=10
00:00:00.226 > git --version # 'git version 2.39.2'
00:00:00.226 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.261 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.261 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.379 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.391 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.402 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD)
00:00:04.402 > git config core.sparsecheckout # timeout=10
00:00:04.413 > git read-tree -mu HEAD # timeout=10
00:00:04.428 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5
00:00:04.448 Commit message: "jenkins/jjb-config: Purge centos leftovers"
00:00:04.448 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10
00:00:04.598 [Pipeline] Start of Pipeline
00:00:04.612 [Pipeline] library
00:00:04.614 Loading library shm_lib@master
00:00:04.614 Library shm_lib@master is cached. Copying from home.
00:00:04.632 [Pipeline] node
00:00:04.640 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.642 [Pipeline] {
00:00:04.651 [Pipeline] catchError
00:00:04.653 [Pipeline] {
00:00:04.663 [Pipeline] wrap
00:00:04.671 [Pipeline] {
00:00:04.678 [Pipeline] stage
00:00:04.680 [Pipeline] { (Prologue)
00:00:04.861 [Pipeline] sh
00:00:05.150 + logger -p user.info -t JENKINS-CI
00:00:05.165 [Pipeline] echo
00:00:05.167 Node: WFP8
00:00:05.173 [Pipeline] sh
00:00:05.465 [Pipeline] setCustomBuildProperty
00:00:05.476 [Pipeline] echo
00:00:05.478 Cleanup processes
00:00:05.482 [Pipeline] sh
00:00:05.759 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.759 1218082 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.772 [Pipeline] sh
00:00:06.051 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.051 ++ grep -v 'sudo pgrep'
00:00:06.051 ++ awk '{print $1}'
00:00:06.051 + sudo kill -9
00:00:06.051 + true
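NOTE: the "Cleanup processes" step above is a common CI idiom: find stale SPDK processes by workspace path, drop the pgrep itself from the match, and kill the rest while tolerating the no-match case. A minimal standalone sketch of the same pattern, with WORKSPACE standing in for the job directory:

    #!/usr/bin/env bash
    # List candidate processes by full command line (pgrep -af prints "PID cmdline"),
    # filter out the pgrep invocation itself, keep only the PID column.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # `kill -9` with an empty PID list exits non-zero; `|| true` keeps `set -e`
    # shells alive, which is why the log shows `+ sudo kill -9` followed by `+ true`.
    sudo kill -9 $pids || true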
00:00:06.067 [Pipeline] cleanWs
00:00:06.076 [WS-CLEANUP] Deleting project workspace...
00:00:06.076 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.082 [WS-CLEANUP] done
00:00:06.088 [Pipeline] setCustomBuildProperty
00:00:06.104 [Pipeline] sh
00:00:06.383 + sudo git config --global --replace-all safe.directory '*'
00:00:06.461 [Pipeline] httpRequest
00:00:06.476 [Pipeline] echo
00:00:06.477 Sorcerer 10.211.164.101 is alive
00:00:06.483 [Pipeline] httpRequest
00:00:06.486 HttpMethod: GET
00:00:06.487 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:06.487 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:06.489 Response Code: HTTP/1.1 200 OK
00:00:06.490 Success: Status code 200 is in the accepted range: 200,404
00:00:06.490 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:07.688 [Pipeline] sh
00:00:07.968 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:07.981 [Pipeline] httpRequest
00:00:08.011 [Pipeline] echo
00:00:08.012 Sorcerer 10.211.164.101 is alive
00:00:08.020 [Pipeline] httpRequest
00:00:08.024 HttpMethod: GET
00:00:08.025 URL: http://10.211.164.101/packages/spdk_ba0567a8216a8d898db1f28be61148d02af09076.tar.gz
00:00:08.025 Sending request to url: http://10.211.164.101/packages/spdk_ba0567a8216a8d898db1f28be61148d02af09076.tar.gz
00:00:08.027 Response Code: HTTP/1.1 200 OK
00:00:08.027 Success: Status code 200 is in the accepted range: 200,404
00:00:08.028 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_ba0567a8216a8d898db1f28be61148d02af09076.tar.gz
00:00:28.072 [Pipeline] sh
00:00:28.358 + tar --no-same-owner -xf spdk_ba0567a8216a8d898db1f28be61148d02af09076.tar.gz
00:00:30.914 [Pipeline] sh
00:00:31.205 + git -C spdk log --oneline -n5
00:00:31.205 ba0567a82 scripts/perf: Include per-node hugepages stats in collect-vmstat
00:00:31.205 a83ad116a scripts/setup.sh: Use HUGE_EVEN_ALLOC logic by default
00:00:31.205 a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent.
00:00:31.205 248c547d0 nvmf/tcp: add option for selecting a sock impl
00:00:31.205 2d30d9f83 accel: introduce tasks in sequence limit
00:00:31.217 [Pipeline] }
00:00:31.231 [Pipeline] // stage
00:00:31.242 [Pipeline] stage
00:00:31.245 [Pipeline] { (Prepare)
00:00:31.262 [Pipeline] writeFile
00:00:31.276 [Pipeline] sh
00:00:31.604 + logger -p user.info -t JENKINS-CI
00:00:31.619 [Pipeline] sh
00:00:31.904 + logger -p user.info -t JENKINS-CI
00:00:31.921 [Pipeline] sh
00:00:32.208 + cat autorun-spdk.conf
00:00:32.208 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.208 SPDK_TEST_NVMF=1
00:00:32.208 SPDK_TEST_NVME_CLI=1
00:00:32.208 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:32.208 SPDK_TEST_NVMF_NICS=e810
00:00:32.208 SPDK_TEST_VFIOUSER=1
00:00:32.208 SPDK_RUN_UBSAN=1
00:00:32.208 NET_TYPE=phy
00:00:32.216 RUN_NIGHTLY=0
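NOTE: autorun-spdk.conf is a plain file of shell variable assignments; the test framework sources it and lets the variables gate individual suites (both the `source` and the `case $SPDK_TEST_NVMF_NICS` dispatch are visible further down). A hedged sketch of that consumption pattern; the real scripts carry more NIC-to-driver mappings than shown:

    # Load the per-job configuration if present (mirrors the `source` seen below).
    conf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
    [[ -f $conf ]] && source "$conf"
    # Map the NIC selection to a kernel driver, as the Prepare stage does:
    case "$SPDK_TEST_NVMF_NICS" in
        e810) DRIVERS=ice ;;   # Intel E810 NICs use the ice driver
        *)    DRIVERS=""  ;;   # other mappings exist in the real script
    esac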
00:00:32.222 [Pipeline] readFile
00:00:32.259 [Pipeline] withEnv
00:00:32.261 [Pipeline] {
00:00:32.276 [Pipeline] sh
00:00:32.567 + set -ex
00:00:32.567 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:32.567 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:32.567 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.567 ++ SPDK_TEST_NVMF=1
00:00:32.567 ++ SPDK_TEST_NVME_CLI=1
00:00:32.567 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:32.567 ++ SPDK_TEST_NVMF_NICS=e810
00:00:32.567 ++ SPDK_TEST_VFIOUSER=1
00:00:32.567 ++ SPDK_RUN_UBSAN=1
00:00:32.567 ++ NET_TYPE=phy
00:00:32.567 ++ RUN_NIGHTLY=0
00:00:32.567 + case $SPDK_TEST_NVMF_NICS in
00:00:32.567 + DRIVERS=ice
00:00:32.567 + [[ tcp == \r\d\m\a ]]
00:00:32.567 + [[ -n ice ]]
00:00:32.567 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:32.567 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:32.567 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:32.567 rmmod: ERROR: Module irdma is not currently loaded
00:00:32.567 rmmod: ERROR: Module i40iw is not currently loaded
00:00:32.567 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:32.567 + true
00:00:32.567 + for D in $DRIVERS
00:00:32.567 + sudo modprobe ice
00:00:32.567 + exit 0
00:00:32.577 [Pipeline] }
00:00:32.597 [Pipeline] // withEnv
00:00:32.602 [Pipeline] }
00:00:32.619 [Pipeline] // stage
00:00:32.631 [Pipeline] catchError
00:00:32.633 [Pipeline] {
00:00:32.650 [Pipeline] timeout
00:00:32.650 Timeout set to expire in 50 min
00:00:32.652 [Pipeline] {
00:00:32.667 [Pipeline] stage
00:00:32.669 [Pipeline] { (Tests)
00:00:32.682 [Pipeline] sh
00:00:32.973 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:32.973 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:32.973 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:32.973 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:32.973 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:32.973 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:32.973 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:32.973 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:32.973 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:32.973 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:32.973 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:32.973 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:32.973 + source /etc/os-release
00:00:32.973 ++ NAME='Fedora Linux'
00:00:32.973 ++ VERSION='38 (Cloud Edition)'
00:00:32.973 ++ ID=fedora
00:00:32.973 ++ VERSION_ID=38
00:00:32.973 ++ VERSION_CODENAME=
00:00:32.973 ++ PLATFORM_ID=platform:f38
00:00:32.973 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:32.973 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:32.973 ++ LOGO=fedora-logo-icon
00:00:32.973 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:32.973 ++ HOME_URL=https://fedoraproject.org/
00:00:32.973 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:32.973 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:32.973 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:32.973 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:32.973 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:32.973 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:32.973 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:32.973 ++ SUPPORT_END=2024-05-14
00:00:32.973 ++ VARIANT='Cloud Edition'
00:00:32.973 ++ VARIANT_ID=cloud
00:00:32.973 + uname -a
00:00:32.973 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:32.973 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:35.511 Hugepages
00:00:35.511 node hugesize free / total
00:00:35.511 node0 1048576kB 0 / 0
00:00:35.511 node0 2048kB 0 / 0
00:00:35.511 node1 1048576kB 0 / 0
00:00:35.511 node1 2048kB 0 / 0
00:00:35.511
00:00:35.511 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:35.511 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:35.511 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:35.511 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:35.511 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:35.511 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:35.511 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:35.511 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:35.511 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:35.511 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:00:35.511 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:35.511 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:35.511 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:35.511 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:35.511 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:35.511 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:35.511 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:35.511 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
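NOTE: the "Hugepages" table above is a per-NUMA-node view, and at this point no pages are reserved yet (free/total are 0 / 0 on both nodes). The same counters can be read straight from sysfs; a small sketch, assuming a standard Linux sysfs layout:

    # Print free/total hugepages per node and page size, the same data
    # that `setup.sh status` tabulates.
    for node in /sys/devices/system/node/node*; do
        for dir in "$node"/hugepages/hugepages-*; do
            printf '%s %s free=%s total=%s\n' "${node##*/}" "${dir##*hugepages-}" \
                "$(cat "$dir/free_hugepages")" "$(cat "$dir/nr_hugepages")"
        done
    done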
00:00:35.511 + rm -f /tmp/spdk-ld-path
00:00:35.511 + source autorun-spdk.conf
00:00:35.511 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:35.511 ++ SPDK_TEST_NVMF=1
00:00:35.511 ++ SPDK_TEST_NVME_CLI=1
00:00:35.511 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:35.511 ++ SPDK_TEST_NVMF_NICS=e810
00:00:35.511 ++ SPDK_TEST_VFIOUSER=1
00:00:35.511 ++ SPDK_RUN_UBSAN=1
00:00:35.511 ++ NET_TYPE=phy
00:00:35.511 ++ RUN_NIGHTLY=0
00:00:35.511 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:35.511 + [[ -n '' ]]
00:00:35.511 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:35.511 + for M in /var/spdk/build-*-manifest.txt
00:00:35.511 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:35.511 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:35.511 + for M in /var/spdk/build-*-manifest.txt
00:00:35.511 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:35.511 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:35.511 ++ uname
00:00:35.511 + [[ Linux == \L\i\n\u\x ]]
00:00:35.511 + sudo dmesg -T
00:00:35.511 + sudo dmesg --clear
00:00:35.511 + dmesg_pid=1219002
00:00:35.511 + [[ Fedora Linux == FreeBSD ]]
00:00:35.511 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:35.511 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:35.512 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:35.512 + [[ -x /usr/src/fio-static/fio ]]
00:00:35.512 + export FIO_BIN=/usr/src/fio-static/fio
00:00:35.512 + FIO_BIN=/usr/src/fio-static/fio
00:00:35.512 + sudo dmesg -Tw
00:00:35.512 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:35.512 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:35.512 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:35.512 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:35.512 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:35.512 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:35.512 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:35.512 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:35.512 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:35.512 Test configuration:
00:00:35.512 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:35.512 SPDK_TEST_NVMF=1
00:00:35.512 SPDK_TEST_NVME_CLI=1
00:00:35.512 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:35.512 SPDK_TEST_NVMF_NICS=e810
00:00:35.512 SPDK_TEST_VFIOUSER=1
00:00:35.512 SPDK_RUN_UBSAN=1
00:00:35.512 NET_TYPE=phy
00:00:35.512 RUN_NIGHTLY=0
00:01:54 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:35.512 00:01:54 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:35.512 00:01:54 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:35.512 00:01:54 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:35.512 00:01:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:35.512 00:01:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:35.512 00:01:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:54 -- paths/export.sh@5 -- $ export PATH
00:01:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
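NOTE: paths/export.sh prepends the toolchain directories unconditionally on each pass, which is why /opt/go, /opt/golangci and /opt/protoc each appear twice in the PATH echoed above. An idempotent variant of the prepend, purely illustrative and not the script's actual code:

    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;             # already present, do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH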
00:01:54 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:54 -- common/autobuild_common.sh@444 -- $ date +%s
00:01:54 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721080914.XXXXXX
00:01:54 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721080914.ZZmxO6
00:01:54 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:01:54 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:01:54 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:54 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:54 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:54 -- common/autobuild_common.sh@460 -- $ get_config_params
00:01:54 -- common/autotest_common.sh@390 -- $ xtrace_disable
00:01:54 -- common/autotest_common.sh@10 -- $ set +x
00:01:54 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:54 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:01:54 -- pm/common@17 -- $ local monitor
00:01:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:54 -- pm/common@21 -- $ date +%s
00:01:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:54 -- pm/common@21 -- $ date +%s
00:01:54 -- pm/common@25 -- $ sleep 1
00:01:54 -- pm/common@21 -- $ date +%s
00:01:54 -- pm/common@21 -- $ date +%s
00:01:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721080914
00:01:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721080914
00:01:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721080914
00:01:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721080914
00:00:35.512 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721080914_collect-vmstat.pm.log
00:00:35.512 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721080914_collect-cpu-load.pm.log
00:00:35.512 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721080914_collect-cpu-temp.pm.log
00:00:35.512 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721080914_collect-bmc-pm.bmc.pm.log
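NOTE: start_monitor_resources launches the pm/ collectors in the background for the duration of the build; each redirects its output to a timestamped log under output/power, which is what the "Redirecting to ..." lines record. A rough sketch of the launch pattern, with SPDK_DIR and OUT as placeholder variables and the flags copied verbatim from the invocations above (the BMC collector additionally runs under sudo -E):

    ts=$(date +%s)
    for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
        "$SPDK_DIR/scripts/perf/pm/$mon" -d "$OUT/power" -l -p "monitor.autobuild.sh.$ts" &
    done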
00:00:36.451 00:01:55 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:00:36.451 00:01:55 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:36.451 00:01:55 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:36.451 00:01:55 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:36.451 00:01:55 -- spdk/autobuild.sh@16 -- $ date -u
00:00:36.451 Mon Jul 15 10:01:55 PM UTC 2024
00:00:36.451 00:01:55 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:36.451 v24.09-pre-211-gba0567a82
00:00:36.451 00:01:55 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:36.451 00:01:55 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:36.451 00:01:55 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:36.451 00:01:55 -- common/autotest_common.sh@1093 -- $ '[' 3 -le 1 ']'
00:00:36.451 00:01:55 -- common/autotest_common.sh@1099 -- $ xtrace_disable
00:00:36.451 00:01:55 -- common/autotest_common.sh@10 -- $ set +x
00:00:36.451 ************************************
00:00:36.451 START TEST ubsan
00:00:36.451 ************************************
00:00:36.451 00:01:55 ubsan -- common/autotest_common.sh@1117 -- $ echo 'using ubsan'
00:00:36.451 using ubsan
00:00:36.451
00:00:36.451 real 0m0.000s
00:00:36.451 user 0m0.000s
00:00:36.451 sys 0m0.000s
00:00:36.451 00:01:55 ubsan -- common/autotest_common.sh@1118 -- $ xtrace_disable
00:00:36.451 00:01:55 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:36.451 ************************************
00:00:36.451 END TEST ubsan
00:00:36.451 ************************************
00:00:36.451 00:01:55 -- common/autotest_common.sh@1136 -- $ return 0
00:00:36.451 00:01:55 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:36.451 00:01:55 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:36.451 00:01:55 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:36.451 00:01:55 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:36.451 00:01:55 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:36.451 00:01:55 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:36.451 00:01:55 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:36.451 00:01:55 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:36.451 00:01:55 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:36.712 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:36.712 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:36.972 Using 'verbs' RDMA provider
00:00:50.132 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:00.114 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:00.114 Creating mk/config.mk...done.
00:01:00.114 Creating mk/cc.flags.mk...done.
00:01:00.114 Type 'make' to build.
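NOTE: the configure invocation above is get_config_params plus --with-shared; the --enable-ubsan and --with-vfio-user flags line up with SPDK_RUN_UBSAN=1 and SPDK_TEST_VFIOUSER=1 from autorun-spdk.conf. A reduced, hedged reproduction for a local build (paths and flag subset assumed):

    cd spdk
    ./configure --enable-debug --enable-werror --enable-ubsan --with-vfio-user --with-shared
    make -j"$(nproc)"   # the CI equivalent is `run_test make make -j96`, as seen below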
00:01:00.114 00:02:18 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
00:01:00.114 00:02:18 -- common/autotest_common.sh@1093 -- $ '[' 3 -le 1 ']'
00:01:00.114 00:02:18 -- common/autotest_common.sh@1099 -- $ xtrace_disable
00:01:00.114 00:02:18 -- common/autotest_common.sh@10 -- $ set +x
00:01:00.372 ************************************
00:01:00.372 START TEST make
00:01:00.372 ************************************
00:01:00.372 00:02:18 make -- common/autotest_common.sh@1117 -- $ make -j96
00:01:00.631 make[1]: Nothing to be done for 'all'.
00:01:02.055 The Meson build system
00:01:02.055 Version: 1.3.1
00:01:02.055 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:02.055 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:02.055 Build type: native build
00:01:02.055 Project name: libvfio-user
00:01:02.055 Project version: 0.0.1
00:01:02.055 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:02.055 C linker for the host machine: cc ld.bfd 2.39-16
00:01:02.055 Host machine cpu family: x86_64
00:01:02.055 Host machine cpu: x86_64
00:01:02.055 Run-time dependency threads found: YES
00:01:02.055 Library dl found: YES
00:01:02.055 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:02.055 Run-time dependency json-c found: YES 0.17
00:01:02.055 Run-time dependency cmocka found: YES 1.1.7
00:01:02.055 Program pytest-3 found: NO
00:01:02.055 Program flake8 found: NO
00:01:02.055 Program misspell-fixer found: NO
00:01:02.055 Program restructuredtext-lint found: NO
00:01:02.055 Program valgrind found: YES (/usr/bin/valgrind)
00:01:02.055 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:02.055 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:02.055 Compiler for C supports arguments -Wwrite-strings: YES
00:01:02.055 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:02.055 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:02.055 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:02.055 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:02.055 Build targets in project: 8
00:01:02.055 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:02.055 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:02.055
00:01:02.055 libvfio-user 0.0.1
00:01:02.055
00:01:02.055 User defined options
00:01:02.055 buildtype : debug
00:01:02.055 default_library: shared
00:01:02.055 libdir : /usr/local/lib
00:01:02.055
00:01:02.055 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:02.312 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:02.313 [1/37] Compiling C object samples/null.p/null.c.o
00:01:02.313 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:02.313 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:02.313 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:02.313 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:02.313 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:02.313 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:02.313 [8/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:02.571 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:02.571 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:02.571 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:02.571 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:02.571 [13/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:02.571 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:02.571 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:02.571 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:02.571 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:02.571 [18/37] Compiling C object samples/server.p/server.c.o
00:01:02.571 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:02.571 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:02.571 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:02.571 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:02.571 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:02.571 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:02.571 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:02.571 [26/37] Compiling C object samples/client.p/client.c.o
00:01:02.571 [27/37] Linking target samples/client
00:01:02.571 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:02.571 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:02.571 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:02.571 [31/37] Linking target test/unit_tests
00:01:02.828 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:02.828 [33/37] Linking target samples/shadow_ioeventfd_server
00:01:02.828 [34/37] Linking target samples/gpio-pci-idio-16
00:01:02.828 [35/37] Linking target samples/server
00:01:02.828 [36/37] Linking target samples/null
00:01:02.828 [37/37] Linking target samples/lspci
00:01:02.828 INFO: autodetecting backend as ninja
00:01:02.828 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:02.828 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:03.086 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:03.086 ninja: no work to do.
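NOTE: the SPDK build drives a nested Meson/ninja build of the bundled libvfio-user here (pulled in by SPDK_TEST_VFIOUSER=1). An approximately equivalent standalone sequence, with paths shortened; the buildtype, default_library and DESTDIR staging values are the ones shown above:

    meson setup build-debug libvfio-user --buildtype=debug -Ddefault_library=shared
    ninja -C build-debug
    # Stage the result into a private prefix rather than the live system:
    DESTDIR=$PWD/install meson install --quiet -C build-debug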
00:01:08.353 The Meson build system
00:01:08.353 Version: 1.3.1
00:01:08.353 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:08.353 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:08.353 Build type: native build
00:01:08.353 Program cat found: YES (/usr/bin/cat)
00:01:08.353 Project name: DPDK
00:01:08.353 Project version: 24.03.0
00:01:08.353 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:08.353 C linker for the host machine: cc ld.bfd 2.39-16
00:01:08.353 Host machine cpu family: x86_64
00:01:08.353 Host machine cpu: x86_64
00:01:08.353 Message: ## Building in Developer Mode ##
00:01:08.353 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:08.353 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:08.353 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:08.353 Program python3 found: YES (/usr/bin/python3)
00:01:08.353 Program cat found: YES (/usr/bin/cat)
00:01:08.353 Compiler for C supports arguments -march=native: YES
00:01:08.353 Checking for size of "void *" : 8
00:01:08.353 Checking for size of "void *" : 8 (cached)
00:01:08.353 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:08.353 Library m found: YES
00:01:08.353 Library numa found: YES
00:01:08.353 Has header "numaif.h" : YES
00:01:08.353 Library fdt found: NO
00:01:08.353 Library execinfo found: NO
00:01:08.353 Has header "execinfo.h" : YES
00:01:08.353 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:08.353 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:08.353 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:08.353 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:08.353 Run-time dependency openssl found: YES 3.0.9
00:01:08.353 Run-time dependency libpcap found: YES 1.10.4
00:01:08.353 Has header "pcap.h" with dependency libpcap: YES
00:01:08.353 Compiler for C supports arguments -Wcast-qual: YES
00:01:08.353 Compiler for C supports arguments -Wdeprecated: YES
00:01:08.353 Compiler for C supports arguments -Wformat: YES
00:01:08.353 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:08.353 Compiler for C supports arguments -Wformat-security: NO
00:01:08.353 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:08.353 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:08.353 Compiler for C supports arguments -Wnested-externs: YES
00:01:08.353 Compiler for C supports arguments -Wold-style-definition: YES
00:01:08.353 Compiler for C supports arguments -Wpointer-arith: YES
00:01:08.353 Compiler for C supports arguments -Wsign-compare: YES
00:01:08.353 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:08.353 Compiler for C supports arguments -Wundef: YES
00:01:08.353 Compiler for C supports arguments -Wwrite-strings: YES
00:01:08.353 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:08.353 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:08.353 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:08.353 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:08.353 Program objdump found: YES (/usr/bin/objdump)
00:01:08.353 Compiler for C supports arguments -mavx512f: YES
00:01:08.353 Checking if "AVX512 checking" compiles: YES
00:01:08.353 Fetching value of define "__SSE4_2__" : 1
00:01:08.353 Fetching value of define "__AES__" : 1
00:01:08.353 Fetching value of define "__AVX__" : 1
00:01:08.353 Fetching value of define "__AVX2__" : 1
00:01:08.353 Fetching value of define "__AVX512BW__" : 1
00:01:08.353 Fetching value of define "__AVX512CD__" : 1
00:01:08.353 Fetching value of define "__AVX512DQ__" : 1
00:01:08.353 Fetching value of define "__AVX512F__" : 1
00:01:08.353 Fetching value of define "__AVX512VL__" : 1
00:01:08.353 Fetching value of define "__PCLMUL__" : 1
00:01:08.353 Fetching value of define "__RDRND__" : 1
00:01:08.353 Fetching value of define "__RDSEED__" : 1
00:01:08.353 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:08.353 Fetching value of define "__znver1__" : (undefined)
00:01:08.353 Fetching value of define "__znver2__" : (undefined)
00:01:08.353 Fetching value of define "__znver3__" : (undefined)
00:01:08.353 Fetching value of define "__znver4__" : (undefined)
00:01:08.353 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:08.353 Message: lib/log: Defining dependency "log"
00:01:08.353 Message: lib/kvargs: Defining dependency "kvargs"
00:01:08.353 Message: lib/telemetry: Defining dependency "telemetry"
00:01:08.353 Checking for function "getentropy" : NO
00:01:08.353 Message: lib/eal: Defining dependency "eal"
00:01:08.353 Message: lib/ring: Defining dependency "ring"
00:01:08.353 Message: lib/rcu: Defining dependency "rcu"
00:01:08.353 Message: lib/mempool: Defining dependency "mempool"
00:01:08.353 Message: lib/mbuf: Defining dependency "mbuf"
00:01:08.353 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:08.353 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:08.353 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:08.353 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:08.353 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:08.353 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:08.353 Compiler for C supports arguments -mpclmul: YES
00:01:08.353 Compiler for C supports arguments -maes: YES
00:01:08.353 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:08.353 Compiler for C supports arguments -mavx512bw: YES
00:01:08.353 Compiler for C supports arguments -mavx512dq: YES
00:01:08.353 Compiler for C supports arguments -mavx512vl: YES
00:01:08.353 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:08.353 Compiler for C supports arguments -mavx2: YES
00:01:08.353 Compiler for C supports arguments -mavx: YES
00:01:08.353 Message: lib/net: Defining dependency "net"
00:01:08.353 Message: lib/meter: Defining dependency "meter"
00:01:08.353 Message: lib/ethdev: Defining dependency "ethdev"
00:01:08.353 Message: lib/pci: Defining dependency "pci"
00:01:08.353 Message: lib/cmdline: Defining dependency "cmdline"
00:01:08.353 Message: lib/hash: Defining dependency "hash"
00:01:08.353 Message: lib/timer: Defining dependency "timer"
00:01:08.353 Message: lib/compressdev: Defining dependency "compressdev"
00:01:08.353 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:08.353 Message: lib/dmadev: Defining dependency "dmadev"
00:01:08.353 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:08.353 Message: lib/power: Defining dependency "power"
00:01:08.353 Message: lib/reorder: Defining dependency "reorder"
00:01:08.353 Message: lib/security: Defining dependency "security"
00:01:08.353 Has header "linux/userfaultfd.h" : YES
00:01:08.353 Has header "linux/vduse.h" : YES
00:01:08.353 Message: lib/vhost: Defining dependency "vhost"
00:01:08.353 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:08.353 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:08.353 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:08.353 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:08.353 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:08.353 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:08.353 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:08.353 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:08.353 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:08.353 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:08.353 Program doxygen found: YES (/usr/bin/doxygen)
00:01:08.353 Configuring doxy-api-html.conf using configuration
00:01:08.353 Configuring doxy-api-man.conf using configuration
00:01:08.353 Program mandb found: YES (/usr/bin/mandb)
00:01:08.353 Program sphinx-build found: NO
00:01:08.353 Configuring rte_build_config.h using configuration
00:01:08.353 Message:
00:01:08.353 =================
00:01:08.353 Applications Enabled
00:01:08.353 =================
00:01:08.353
00:01:08.353 apps:
00:01:08.353
00:01:08.353
00:01:08.353 Message:
00:01:08.353 =================
00:01:08.353 Libraries Enabled
00:01:08.353 =================
00:01:08.353
00:01:08.353 libs:
00:01:08.353 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:08.353 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:08.353 cryptodev, dmadev, power, reorder, security, vhost,
00:01:08.353
00:01:08.353 Message:
00:01:08.353 ===============
00:01:08.353 Drivers Enabled
00:01:08.353 ===============
00:01:08.353
00:01:08.353 common:
00:01:08.353
00:01:08.353 bus:
00:01:08.353 pci, vdev,
00:01:08.353 mempool:
00:01:08.353 ring,
00:01:08.353 dma:
00:01:08.353
00:01:08.353 net:
00:01:08.353
00:01:08.353 crypto:
00:01:08.353
00:01:08.353 compress:
00:01:08.353
00:01:08.353 vdpa:
00:01:08.353
00:01:08.353
00:01:08.353 Message:
00:01:08.353 =================
00:01:08.353 Content Skipped
00:01:08.353 =================
00:01:08.353
00:01:08.353 apps:
00:01:08.353 dumpcap: explicitly disabled via build config
00:01:08.354 graph: explicitly disabled via build config
00:01:08.354 pdump: explicitly disabled via build config
00:01:08.354 proc-info: explicitly disabled via build config
00:01:08.354 test-acl: explicitly disabled via build config
00:01:08.354 test-bbdev: explicitly disabled via build config
00:01:08.354 test-cmdline: explicitly disabled via build config
00:01:08.354 test-compress-perf: explicitly disabled via build config
00:01:08.354 test-crypto-perf: explicitly disabled via build config
00:01:08.354 test-dma-perf: explicitly disabled via build config
00:01:08.354 test-eventdev: explicitly disabled via build config
00:01:08.354 test-fib: explicitly disabled via build config
00:01:08.354 test-flow-perf: explicitly disabled via build config
00:01:08.354 test-gpudev: explicitly disabled via build config
00:01:08.354 test-mldev: explicitly disabled via build config
00:01:08.354 test-pipeline: explicitly disabled via build config
00:01:08.354 test-pmd: explicitly disabled via build config
00:01:08.354 test-regex: explicitly disabled via build config
00:01:08.354 test-sad: explicitly disabled via build config
00:01:08.354 test-security-perf: explicitly disabled via build config
00:01:08.354
00:01:08.354 libs:
00:01:08.354 argparse: explicitly disabled via build config
00:01:08.354 metrics: explicitly disabled via build config
00:01:08.354 acl: explicitly disabled via build config
00:01:08.354 bbdev: explicitly disabled via build config
00:01:08.354 bitratestats: explicitly disabled via build config
00:01:08.354 bpf: explicitly disabled via build config
00:01:08.354 cfgfile: explicitly disabled via build config
00:01:08.354 distributor: explicitly disabled via build config
00:01:08.354 efd: explicitly disabled via build config
00:01:08.354 eventdev: explicitly disabled via build config
00:01:08.354 dispatcher: explicitly disabled via build config
00:01:08.354 gpudev: explicitly disabled via build config
00:01:08.354 gro: explicitly disabled via build config
00:01:08.354 gso: explicitly disabled via build config
00:01:08.354 ip_frag: explicitly disabled via build config
00:01:08.354 jobstats: explicitly disabled via build config
00:01:08.354 latencystats: explicitly disabled via build config
00:01:08.354 lpm: explicitly disabled via build config
00:01:08.354 member: explicitly disabled via build config
00:01:08.354 pcapng: explicitly disabled via build config
00:01:08.354 rawdev: explicitly disabled via build config
00:01:08.354 regexdev: explicitly disabled via build config
00:01:08.354 mldev: explicitly disabled via build config
00:01:08.354 rib: explicitly disabled via build config
00:01:08.354 sched: explicitly disabled via build config
00:01:08.354 stack: explicitly disabled via build config
00:01:08.354 ipsec: explicitly disabled via build config
00:01:08.354 pdcp: explicitly disabled via build config
00:01:08.354 fib: explicitly disabled via build config
00:01:08.354 port: explicitly disabled via build config
00:01:08.354 pdump: explicitly disabled via build config
00:01:08.354 table: explicitly disabled via build config
00:01:08.354 pipeline: explicitly disabled via build config
00:01:08.354 graph: explicitly disabled via build config
00:01:08.354 node: explicitly disabled via build config
00:01:08.354
00:01:08.354 drivers:
00:01:08.354 common/cpt: not in enabled drivers build config
00:01:08.354 common/dpaax: not in enabled drivers build config
00:01:08.354 common/iavf: not in enabled drivers build config
00:01:08.354 common/idpf: not in enabled drivers build config
00:01:08.354 common/ionic: not in enabled drivers build config
00:01:08.354 common/mvep: not in enabled drivers build config
00:01:08.354 common/octeontx: not in enabled drivers build config
00:01:08.354 bus/auxiliary: not in enabled drivers build config
00:01:08.354 bus/cdx: not in enabled drivers build config
00:01:08.354 bus/dpaa: not in enabled drivers build config
00:01:08.354 bus/fslmc: not in enabled drivers build config
00:01:08.354 bus/ifpga: not in enabled drivers build config
00:01:08.354 bus/platform: not in enabled drivers build config
00:01:08.354 bus/uacce: not in enabled drivers build config
00:01:08.354 bus/vmbus: not in enabled drivers build config
00:01:08.354 common/cnxk: not in enabled drivers build config
00:01:08.354 common/mlx5: not in enabled drivers build config
00:01:08.354 common/nfp: not in enabled drivers build config
00:01:08.354 common/nitrox: not in enabled drivers build config
00:01:08.354 common/qat: not in enabled drivers build config
00:01:08.354 common/sfc_efx: not in enabled drivers build config
00:01:08.354 mempool/bucket: not in enabled drivers build config
00:01:08.354 mempool/cnxk: not in enabled drivers build config
00:01:08.354 mempool/dpaa: not in enabled drivers build config
00:01:08.354 mempool/dpaa2: not in enabled drivers build config
00:01:08.354 mempool/octeontx: not in enabled drivers build config
00:01:08.354 mempool/stack: not in enabled drivers build config
00:01:08.354 dma/cnxk: not in enabled drivers build config
00:01:08.354 dma/dpaa: not in enabled drivers build config
00:01:08.354 dma/dpaa2: not in enabled drivers build config
00:01:08.354 dma/hisilicon: not in enabled drivers build config
00:01:08.354 dma/idxd: not in enabled drivers build config
00:01:08.354 dma/ioat: not in enabled drivers build config
00:01:08.354 dma/skeleton: not in enabled drivers build config
00:01:08.354 net/af_packet: not in enabled drivers build config
00:01:08.354 net/af_xdp: not in enabled drivers build config
00:01:08.354 net/ark: not in enabled drivers build config
00:01:08.354 net/atlantic: not in enabled drivers build config
00:01:08.354 net/avp: not in enabled drivers build config
00:01:08.354 net/axgbe: not in enabled drivers build config
00:01:08.354 net/bnx2x: not in enabled drivers build config
00:01:08.354 net/bnxt: not in enabled drivers build config
00:01:08.354 net/bonding: not in enabled drivers build config
00:01:08.354 net/cnxk: not in enabled drivers build config
00:01:08.354 net/cpfl: not in enabled drivers build config
00:01:08.354 net/cxgbe: not in enabled drivers build config
00:01:08.354 net/dpaa: not in enabled drivers build config
00:01:08.354 net/dpaa2: not in enabled drivers build config
00:01:08.354 net/e1000: not in enabled drivers build config
00:01:08.354 net/ena: not in enabled drivers build config
00:01:08.354 net/enetc: not in enabled drivers build config
00:01:08.354 net/enetfec: not in enabled drivers build config
00:01:08.354 net/enic: not in enabled drivers build config
00:01:08.354 net/failsafe: not in enabled drivers build config
00:01:08.354 net/fm10k: not in enabled drivers build config
00:01:08.354 net/gve: not in enabled drivers build config
00:01:08.354 net/hinic: not in enabled drivers build config
00:01:08.354 net/hns3: not in enabled drivers build config
00:01:08.354 net/i40e: not in enabled drivers build config
00:01:08.354 net/iavf: not in enabled drivers build config
00:01:08.354 net/ice: not in enabled drivers build config
00:01:08.354 net/idpf: not in enabled drivers build config
00:01:08.354 net/igc: not in enabled drivers build config
00:01:08.354 net/ionic: not in enabled drivers build config
00:01:08.354 net/ipn3ke: not in enabled drivers build config
00:01:08.354 net/ixgbe: not in enabled drivers build config
00:01:08.354 net/mana: not in enabled drivers build config
00:01:08.354 net/memif: not in enabled drivers build config
00:01:08.354 net/mlx4: not in enabled drivers build config
00:01:08.354 net/mlx5: not in enabled drivers build config
00:01:08.354 net/mvneta: not in enabled drivers build config
00:01:08.354 net/mvpp2: not in enabled drivers build config
00:01:08.354 net/netvsc: not in enabled drivers build config
00:01:08.354 net/nfb: not in enabled drivers build config
00:01:08.354 net/nfp: not in enabled drivers build config
00:01:08.354 net/ngbe: not in enabled drivers build config
00:01:08.354 net/null: not in enabled drivers build config
00:01:08.354 net/octeontx: not in enabled drivers build config
00:01:08.354 net/octeon_ep: not in enabled drivers build config
00:01:08.354 net/pcap: not in enabled drivers build config
00:01:08.354 net/pfe: not in enabled drivers build config
00:01:08.354 net/qede: not in enabled drivers build config
00:01:08.354 net/ring: not in enabled drivers build config
00:01:08.354 net/sfc: not in enabled drivers build config
00:01:08.354 net/softnic: not in enabled drivers build config
00:01:08.354 net/tap: not in enabled drivers build config
00:01:08.354 net/thunderx: not in enabled drivers build config
00:01:08.354 net/txgbe: not in enabled drivers build config
00:01:08.354 net/vdev_netvsc: not in enabled drivers build config
00:01:08.354 net/vhost: not in enabled drivers build config
00:01:08.354 net/virtio: not in enabled drivers build config
00:01:08.354 net/vmxnet3: not in enabled drivers build config
00:01:08.354 raw/*: missing internal dependency, "rawdev"
00:01:08.354 crypto/armv8: not in enabled drivers build config
00:01:08.354 crypto/bcmfs: not in enabled drivers build config
00:01:08.354 crypto/caam_jr: not in enabled drivers build config
00:01:08.354 crypto/ccp: not in enabled drivers build config
00:01:08.354 crypto/cnxk: not in enabled drivers build config
00:01:08.354 crypto/dpaa_sec: not in enabled drivers build config
00:01:08.354 crypto/dpaa2_sec: not in enabled drivers build config
00:01:08.354 crypto/ipsec_mb: not in enabled drivers build config
00:01:08.354 crypto/mlx5: not in enabled drivers build config
00:01:08.354 crypto/mvsam: not in enabled drivers build config
00:01:08.354 crypto/nitrox: not in enabled drivers build config
00:01:08.354 crypto/null: not in enabled drivers build config
00:01:08.354 crypto/octeontx: not in enabled drivers build config
00:01:08.354 crypto/openssl: not in enabled drivers build config
00:01:08.354 crypto/scheduler: not in enabled drivers build config
00:01:08.354 crypto/uadk: not in enabled drivers build config
00:01:08.354 crypto/virtio: not in enabled drivers build config
00:01:08.354 compress/isal: not in enabled drivers build config
00:01:08.354 compress/mlx5: not in enabled drivers build config
00:01:08.354 compress/nitrox: not in enabled drivers build config
00:01:08.354 compress/octeontx: not in enabled drivers build config
00:01:08.354 compress/zlib: not in enabled drivers build config
00:01:08.354 regex/*: missing internal dependency, "regexdev"
00:01:08.354 ml/*: missing internal dependency, "mldev"
00:01:08.354 vdpa/ifc: not in enabled drivers build config
00:01:08.354 vdpa/mlx5: not in enabled drivers build config
00:01:08.354 vdpa/nfp: not in enabled drivers build config
00:01:08.354 vdpa/sfc: not in enabled drivers build config
00:01:08.354 event/*: missing internal dependency, "eventdev"
00:01:08.354 baseband/*: missing internal dependency, "bbdev"
00:01:08.354 gpu/*: missing internal dependency, "gpudev"
00:01:08.354
00:01:08.354
00:01:08.355 Build targets in project: 85
00:01:08.355
00:01:08.355 DPDK 24.03.0
00:01:08.355
00:01:08.355 User defined options
00:01:08.355 buildtype : debug
00:01:08.355 default_library : shared
00:01:08.355 libdir : lib
00:01:08.355 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:08.355 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:08.355 c_link_args :
00:01:08.355 cpu_instruction_set: native
00:01:08.355 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:01:08.355 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:01:08.355 enable_docs : false
00:01:08.355 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:08.355 enable_kmods : false
00:01:08.355 max_lcores : 128
00:01:08.355 tests : false
00:01:08.355
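NOTE: the DPDK sub-build is configured with Meson in the same way as libvfio-user, but with most apps, libs and drivers disabled; only bus/pci, bus/vdev and mempool/ring are enabled, as the summary above shows. A trimmed-down, hedged equivalent of that configuration for the ninja build that follows (source/build directory names assumed):

    meson setup build-tmp dpdk --buildtype=debug -Ddefault_library=shared \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Dmax_lcores=128 -Dtests=false
    ninja -C build-tmp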
00:01:08.355 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:08.621 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:08.621 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:08.879 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:08.879 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:08.879 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:08.879 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:08.879 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:08.879 [7/268] Linking static target lib/librte_kvargs.a
00:01:08.879 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:08.879 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:08.879 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:08.879 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:08.879 [12/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:08.879 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:08.879 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:08.879 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:08.879 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:08.879 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:08.879 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:08.879 [19/268] Linking static target lib/librte_log.a
00:01:09.138 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:09.138 [21/268] Linking static target lib/librte_pci.a
00:01:09.138 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:09.138 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:09.138 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:09.138 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:09.138 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:09.138 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:09.138 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:09.138 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:09.138 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:09.138 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:09.401 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:09.401 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:09.401 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:09.401 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:09.401 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:09.401 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:09.401 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:09.401 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:09.401 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:09.401 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:09.401 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:09.401 [43/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:09.401 [44/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:09.401 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:09.401 [46/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:09.401 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:09.401 [48/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:09.401 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:09.401 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:09.401 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:09.401 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:09.401 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:09.401 [54/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:09.401 [55/268] Linking static target lib/librte_meter.a
00:01:09.401 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:09.401 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:09.401 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:09.401 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:09.401 [60/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:09.401 [61/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:09.401 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:09.401 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:09.401 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:09.401 [65/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:09.401 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:09.401 [67/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:09.401 [68/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:09.401 [69/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:09.401 [70/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:09.401 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:09.401 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:09.401 [73/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:09.401 [74/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:09.401 [75/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:09.401 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:09.401 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:09.401 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:09.401 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:09.401 [80/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:09.401 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:09.401 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:09.401 [83/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:09.401 [84/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:09.401 [85/268] Linking static target lib/librte_telemetry.a
00:01:09.401 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:09.401 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:09.401 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:09.401 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:09.401 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:09.401 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:09.401 [92/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:09.401 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:09.401 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:09.401 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:09.401 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:09.401 [97/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:09.401 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:09.401 [99/268] Linking static target lib/librte_ring.a
00:01:09.401 [100/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:09.401 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:09.401 [102/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:09.401 [103/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:09.401 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:09.401 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:09.401 [106/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:09.401 [107/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:09.401 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:09.401 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:09.401 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:09.401 [111/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:09.401 [112/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:09.401 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:09.401 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:09.401 [115/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:09.401 [116/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:09.660 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:09.660 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:09.660 [119/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:09.660 [120/268] Linking static target lib/librte_eal.a
00:01:09.660 [121/268] Linking static target lib/librte_mempool.a
00:01:09.660 [122/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:09.660 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:09.660 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:09.660 [125/268] Linking static target lib/librte_rcu.a
00:01:09.660 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:09.660 [127/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:09.660 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:09.660 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:09.660 [130/268] Linking static target lib/librte_net.a
00:01:09.660 [131/268] Linking static target lib/librte_cmdline.a
00:01:09.660 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:09.660 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:09.660 [134/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:09.660 [135/268] Linking target lib/librte_log.so.24.1
00:01:09.660 [136/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:09.660 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:09.660 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:09.660 [139/268] Linking static target lib/librte_mbuf.a
00:01:09.660 [140/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:09.661 [141/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:09.661 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:09.661 [143/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:09.661 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:09.661 [145/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:09.661 [146/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:09.661 [147/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:09.661 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:09.661 [149/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:09.661 [150/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:09.661 [151/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:09.661 [152/268]
Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:09.661 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:09.661 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:09.661 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:09.661 [156/268] Linking static target lib/librte_compressdev.a 00:01:09.919 [157/268] Linking static target lib/librte_timer.a 00:01:09.919 [158/268] Linking target lib/librte_kvargs.so.24.1 00:01:09.919 [159/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.919 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:09.919 [161/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:09.919 [162/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.919 [163/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.919 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:09.919 [165/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:09.919 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:09.919 [167/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:09.919 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:09.919 [169/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:09.919 [170/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:09.919 [171/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:09.919 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:09.919 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:09.919 [174/268] Linking target lib/librte_telemetry.so.24.1 00:01:09.919 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:09.919 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:09.919 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:09.919 [178/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:09.919 [179/268] Linking static target lib/librte_power.a 00:01:09.919 [180/268] Linking static target lib/librte_security.a 00:01:09.919 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:09.919 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:09.919 [183/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:09.919 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:09.919 [185/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:09.919 [186/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:09.919 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:09.919 [188/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:09.919 [189/268] Linking static target lib/librte_dmadev.a 00:01:09.919 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:09.919 [191/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:09.919 [192/268] Linking static target lib/librte_hash.a 00:01:09.919 
[193/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:09.919 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:09.919 [195/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:09.919 [196/268] Linking static target lib/librte_reorder.a 00:01:09.919 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:09.919 [198/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:10.178 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:10.178 [200/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:10.178 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:10.178 [202/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:10.178 [203/268] Linking static target drivers/librte_bus_vdev.a 00:01:10.178 [204/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:10.178 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:10.178 [206/268] Linking static target drivers/librte_mempool_ring.a 00:01:10.178 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:10.178 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:10.178 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:10.178 [210/268] Linking static target drivers/librte_bus_pci.a 00:01:10.178 [211/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.178 [212/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.436 [213/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.436 [214/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:10.436 [215/268] Linking static target lib/librte_cryptodev.a 00:01:10.436 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.436 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.436 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.436 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:10.436 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.436 [221/268] Linking static target lib/librte_ethdev.a 00:01:10.695 [222/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.695 [223/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.695 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:10.695 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.953 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.953 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.886 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:11.886 [229/268] Linking static target 
lib/librte_vhost.a 00:01:12.145 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.519 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.791 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.050 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.050 [234/268] Linking target lib/librte_eal.so.24.1 00:01:19.307 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:19.307 [236/268] Linking target lib/librte_ring.so.24.1 00:01:19.307 [237/268] Linking target lib/librte_pci.so.24.1 00:01:19.307 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:19.307 [239/268] Linking target lib/librte_timer.so.24.1 00:01:19.307 [240/268] Linking target lib/librte_dmadev.so.24.1 00:01:19.307 [241/268] Linking target lib/librte_meter.so.24.1 00:01:19.565 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:19.565 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:19.565 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:19.565 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:19.565 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:19.565 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:19.565 [248/268] Linking target lib/librte_mempool.so.24.1 00:01:19.565 [249/268] Linking target lib/librte_rcu.so.24.1 00:01:19.565 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:19.565 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:19.565 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:19.565 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:19.823 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:19.823 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:19.823 [256/268] Linking target lib/librte_reorder.so.24.1 00:01:19.823 [257/268] Linking target lib/librte_compressdev.so.24.1 00:01:19.823 [258/268] Linking target lib/librte_net.so.24.1 00:01:19.823 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:20.081 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:20.081 [261/268] Linking target lib/librte_cmdline.so.24.1 00:01:20.081 [262/268] Linking target lib/librte_hash.so.24.1 00:01:20.081 [263/268] Linking target lib/librte_security.so.24.1 00:01:20.081 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:20.081 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:20.081 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:20.340 [267/268] Linking target lib/librte_power.so.24.1 00:01:20.340 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:20.340 INFO: autodetecting backend as ninja 00:01:20.340 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:21.275 CC lib/log/log.o 00:01:21.275 CC lib/ut/ut.o 00:01:21.275 CC lib/log/log_flags.o 00:01:21.275 CC lib/log/log_deprecated.o 00:01:21.275 
CC lib/ut_mock/mock.o 00:01:21.275 LIB libspdk_ut.a 00:01:21.275 LIB libspdk_log.a 00:01:21.275 SO libspdk_ut.so.2.0 00:01:21.275 LIB libspdk_ut_mock.a 00:01:21.275 SO libspdk_log.so.7.0 00:01:21.533 SO libspdk_ut_mock.so.6.0 00:01:21.533 SYMLINK libspdk_ut.so 00:01:21.533 SYMLINK libspdk_log.so 00:01:21.533 SYMLINK libspdk_ut_mock.so 00:01:21.791 CC lib/util/base64.o 00:01:21.791 CC lib/util/cpuset.o 00:01:21.791 CC lib/util/bit_array.o 00:01:21.791 CC lib/util/crc16.o 00:01:21.791 CC lib/util/crc64.o 00:01:21.791 CC lib/util/crc32.o 00:01:21.791 CC lib/util/crc32c.o 00:01:21.791 CC lib/util/dif.o 00:01:21.791 CC lib/dma/dma.o 00:01:21.791 CC lib/util/crc32_ieee.o 00:01:21.791 CC lib/util/fd.o 00:01:21.791 CC lib/util/iov.o 00:01:21.791 CC lib/util/file.o 00:01:21.791 CC lib/util/hexlify.o 00:01:21.791 CC lib/util/math.o 00:01:21.791 CC lib/util/pipe.o 00:01:21.791 CC lib/util/strerror_tls.o 00:01:21.791 CC lib/util/string.o 00:01:21.791 CC lib/util/uuid.o 00:01:21.791 CC lib/util/zipf.o 00:01:21.791 CC lib/util/fd_group.o 00:01:21.791 CC lib/util/xor.o 00:01:21.791 CXX lib/trace_parser/trace.o 00:01:21.791 CC lib/ioat/ioat.o 00:01:21.791 CC lib/vfio_user/host/vfio_user_pci.o 00:01:21.791 CC lib/vfio_user/host/vfio_user.o 00:01:22.049 LIB libspdk_dma.a 00:01:22.049 SO libspdk_dma.so.4.0 00:01:22.049 LIB libspdk_ioat.a 00:01:22.049 SYMLINK libspdk_dma.so 00:01:22.050 SO libspdk_ioat.so.7.0 00:01:22.050 SYMLINK libspdk_ioat.so 00:01:22.050 LIB libspdk_vfio_user.a 00:01:22.050 SO libspdk_vfio_user.so.5.0 00:01:22.050 LIB libspdk_util.a 00:01:22.308 SYMLINK libspdk_vfio_user.so 00:01:22.308 SO libspdk_util.so.9.1 00:01:22.308 SYMLINK libspdk_util.so 00:01:22.308 LIB libspdk_trace_parser.a 00:01:22.566 SO libspdk_trace_parser.so.5.0 00:01:22.566 SYMLINK libspdk_trace_parser.so 00:01:22.566 CC lib/json/json_parse.o 00:01:22.566 CC lib/json/json_util.o 00:01:22.566 CC lib/rdma_provider/common.o 00:01:22.566 CC lib/json/json_write.o 00:01:22.566 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:22.566 CC lib/conf/conf.o 00:01:22.566 CC lib/idxd/idxd.o 00:01:22.567 CC lib/idxd/idxd_user.o 00:01:22.567 CC lib/idxd/idxd_kernel.o 00:01:22.567 CC lib/env_dpdk/env.o 00:01:22.567 CC lib/rdma_utils/rdma_utils.o 00:01:22.567 CC lib/env_dpdk/pci.o 00:01:22.567 CC lib/env_dpdk/memory.o 00:01:22.567 CC lib/env_dpdk/init.o 00:01:22.567 CC lib/env_dpdk/threads.o 00:01:22.567 CC lib/env_dpdk/pci_ioat.o 00:01:22.567 CC lib/env_dpdk/pci_virtio.o 00:01:22.567 CC lib/env_dpdk/pci_vmd.o 00:01:22.567 CC lib/env_dpdk/pci_idxd.o 00:01:22.567 CC lib/env_dpdk/pci_event.o 00:01:22.567 CC lib/vmd/led.o 00:01:22.567 CC lib/vmd/vmd.o 00:01:22.567 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:22.567 CC lib/env_dpdk/sigbus_handler.o 00:01:22.567 CC lib/env_dpdk/pci_dpdk.o 00:01:22.567 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:22.825 LIB libspdk_rdma_provider.a 00:01:22.825 SO libspdk_rdma_provider.so.6.0 00:01:22.825 LIB libspdk_conf.a 00:01:22.825 SO libspdk_conf.so.6.0 00:01:22.825 LIB libspdk_rdma_utils.a 00:01:22.825 SYMLINK libspdk_rdma_provider.so 00:01:22.825 LIB libspdk_json.a 00:01:22.825 SO libspdk_rdma_utils.so.1.0 00:01:22.825 SYMLINK libspdk_conf.so 00:01:22.825 SO libspdk_json.so.6.0 00:01:23.083 SYMLINK libspdk_rdma_utils.so 00:01:23.083 SYMLINK libspdk_json.so 00:01:23.083 LIB libspdk_idxd.a 00:01:23.083 SO libspdk_idxd.so.12.0 00:01:23.083 LIB libspdk_vmd.a 00:01:23.083 SO libspdk_vmd.so.6.0 00:01:23.083 SYMLINK libspdk_idxd.so 00:01:23.341 SYMLINK libspdk_vmd.so 00:01:23.341 CC lib/jsonrpc/jsonrpc_client.o 00:01:23.341 CC 
lib/jsonrpc/jsonrpc_server.o 00:01:23.341 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:23.341 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:23.599 LIB libspdk_jsonrpc.a 00:01:23.599 SO libspdk_jsonrpc.so.6.0 00:01:23.599 SYMLINK libspdk_jsonrpc.so 00:01:23.599 LIB libspdk_env_dpdk.a 00:01:23.857 SO libspdk_env_dpdk.so.14.1 00:01:23.857 SYMLINK libspdk_env_dpdk.so 00:01:23.857 CC lib/rpc/rpc.o 00:01:24.115 LIB libspdk_rpc.a 00:01:24.115 SO libspdk_rpc.so.6.0 00:01:24.115 SYMLINK libspdk_rpc.so 00:01:24.487 CC lib/keyring/keyring.o 00:01:24.487 CC lib/keyring/keyring_rpc.o 00:01:24.487 CC lib/trace/trace.o 00:01:24.487 CC lib/trace/trace_flags.o 00:01:24.487 CC lib/trace/trace_rpc.o 00:01:24.487 CC lib/notify/notify.o 00:01:24.487 CC lib/notify/notify_rpc.o 00:01:24.746 LIB libspdk_keyring.a 00:01:24.746 LIB libspdk_notify.a 00:01:24.746 SO libspdk_keyring.so.1.0 00:01:24.746 SO libspdk_notify.so.6.0 00:01:24.746 LIB libspdk_trace.a 00:01:24.746 SYMLINK libspdk_keyring.so 00:01:24.746 SO libspdk_trace.so.10.0 00:01:24.746 SYMLINK libspdk_notify.so 00:01:24.746 SYMLINK libspdk_trace.so 00:01:25.005 CC lib/sock/sock.o 00:01:25.005 CC lib/sock/sock_rpc.o 00:01:25.005 CC lib/thread/thread.o 00:01:25.005 CC lib/thread/iobuf.o 00:01:25.264 LIB libspdk_sock.a 00:01:25.523 SO libspdk_sock.so.10.0 00:01:25.523 SYMLINK libspdk_sock.so 00:01:25.781 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:25.781 CC lib/nvme/nvme_ctrlr.o 00:01:25.781 CC lib/nvme/nvme_fabric.o 00:01:25.781 CC lib/nvme/nvme_ns_cmd.o 00:01:25.781 CC lib/nvme/nvme_ns.o 00:01:25.781 CC lib/nvme/nvme_pcie_common.o 00:01:25.781 CC lib/nvme/nvme_pcie.o 00:01:25.781 CC lib/nvme/nvme_qpair.o 00:01:25.781 CC lib/nvme/nvme.o 00:01:25.781 CC lib/nvme/nvme_discovery.o 00:01:25.781 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:25.781 CC lib/nvme/nvme_quirks.o 00:01:25.781 CC lib/nvme/nvme_tcp.o 00:01:25.781 CC lib/nvme/nvme_transport.o 00:01:25.781 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:25.781 CC lib/nvme/nvme_opal.o 00:01:25.781 CC lib/nvme/nvme_io_msg.o 00:01:25.781 CC lib/nvme/nvme_poll_group.o 00:01:25.781 CC lib/nvme/nvme_zns.o 00:01:25.781 CC lib/nvme/nvme_stubs.o 00:01:25.781 CC lib/nvme/nvme_auth.o 00:01:25.781 CC lib/nvme/nvme_cuse.o 00:01:25.781 CC lib/nvme/nvme_vfio_user.o 00:01:25.781 CC lib/nvme/nvme_rdma.o 00:01:26.040 LIB libspdk_thread.a 00:01:26.298 SO libspdk_thread.so.10.1 00:01:26.298 SYMLINK libspdk_thread.so 00:01:26.556 CC lib/vfu_tgt/tgt_endpoint.o 00:01:26.556 CC lib/vfu_tgt/tgt_rpc.o 00:01:26.556 CC lib/init/subsystem.o 00:01:26.556 CC lib/blob/blobstore.o 00:01:26.556 CC lib/init/rpc.o 00:01:26.556 CC lib/init/json_config.o 00:01:26.556 CC lib/blob/request.o 00:01:26.556 CC lib/init/subsystem_rpc.o 00:01:26.556 CC lib/blob/zeroes.o 00:01:26.556 CC lib/blob/blob_bs_dev.o 00:01:26.556 CC lib/accel/accel.o 00:01:26.556 CC lib/accel/accel_rpc.o 00:01:26.556 CC lib/virtio/virtio.o 00:01:26.556 CC lib/virtio/virtio_vfio_user.o 00:01:26.556 CC lib/accel/accel_sw.o 00:01:26.556 CC lib/virtio/virtio_vhost_user.o 00:01:26.556 CC lib/virtio/virtio_pci.o 00:01:26.815 LIB libspdk_init.a 00:01:26.815 SO libspdk_init.so.5.0 00:01:26.815 LIB libspdk_vfu_tgt.a 00:01:26.815 SO libspdk_vfu_tgt.so.3.0 00:01:26.815 LIB libspdk_virtio.a 00:01:26.815 SYMLINK libspdk_init.so 00:01:26.815 SO libspdk_virtio.so.7.0 00:01:26.815 SYMLINK libspdk_vfu_tgt.so 00:01:26.815 SYMLINK libspdk_virtio.so 00:01:27.074 CC lib/event/app.o 00:01:27.074 CC lib/event/reactor.o 00:01:27.074 CC lib/event/app_rpc.o 00:01:27.074 CC lib/event/log_rpc.o 00:01:27.074 CC 
lib/event/scheduler_static.o 00:01:27.333 LIB libspdk_accel.a 00:01:27.333 SO libspdk_accel.so.15.1 00:01:27.333 SYMLINK libspdk_accel.so 00:01:27.333 LIB libspdk_nvme.a 00:01:27.333 SO libspdk_nvme.so.13.1 00:01:27.333 LIB libspdk_event.a 00:01:27.592 SO libspdk_event.so.14.0 00:01:27.592 SYMLINK libspdk_event.so 00:01:27.592 CC lib/bdev/bdev.o 00:01:27.592 CC lib/bdev/bdev_rpc.o 00:01:27.592 CC lib/bdev/bdev_zone.o 00:01:27.592 CC lib/bdev/scsi_nvme.o 00:01:27.592 CC lib/bdev/part.o 00:01:27.592 SYMLINK libspdk_nvme.so 00:01:28.537 LIB libspdk_blob.a 00:01:28.537 SO libspdk_blob.so.11.0 00:01:28.796 SYMLINK libspdk_blob.so 00:01:29.054 CC lib/lvol/lvol.o 00:01:29.054 CC lib/blobfs/blobfs.o 00:01:29.054 CC lib/blobfs/tree.o 00:01:29.313 LIB libspdk_bdev.a 00:01:29.313 SO libspdk_bdev.so.15.1 00:01:29.572 SYMLINK libspdk_bdev.so 00:01:29.572 LIB libspdk_blobfs.a 00:01:29.572 LIB libspdk_lvol.a 00:01:29.572 SO libspdk_blobfs.so.10.0 00:01:29.572 SO libspdk_lvol.so.10.0 00:01:29.572 SYMLINK libspdk_blobfs.so 00:01:29.832 SYMLINK libspdk_lvol.so 00:01:29.832 CC lib/nbd/nbd.o 00:01:29.832 CC lib/nbd/nbd_rpc.o 00:01:29.832 CC lib/scsi/dev.o 00:01:29.832 CC lib/scsi/lun.o 00:01:29.832 CC lib/scsi/port.o 00:01:29.832 CC lib/scsi/scsi.o 00:01:29.832 CC lib/nvmf/ctrlr.o 00:01:29.832 CC lib/scsi/scsi_bdev.o 00:01:29.832 CC lib/scsi/task.o 00:01:29.832 CC lib/scsi/scsi_pr.o 00:01:29.832 CC lib/scsi/scsi_rpc.o 00:01:29.832 CC lib/nvmf/ctrlr_discovery.o 00:01:29.832 CC lib/nvmf/ctrlr_bdev.o 00:01:29.832 CC lib/ublk/ublk_rpc.o 00:01:29.832 CC lib/ublk/ublk.o 00:01:29.832 CC lib/nvmf/subsystem.o 00:01:29.832 CC lib/nvmf/nvmf.o 00:01:29.832 CC lib/nvmf/nvmf_rpc.o 00:01:29.832 CC lib/nvmf/transport.o 00:01:29.832 CC lib/nvmf/tcp.o 00:01:29.832 CC lib/nvmf/mdns_server.o 00:01:29.832 CC lib/nvmf/stubs.o 00:01:29.832 CC lib/ftl/ftl_core.o 00:01:29.832 CC lib/nvmf/rdma.o 00:01:29.832 CC lib/nvmf/vfio_user.o 00:01:29.832 CC lib/ftl/ftl_init.o 00:01:29.832 CC lib/nvmf/auth.o 00:01:29.832 CC lib/ftl/ftl_layout.o 00:01:29.832 CC lib/ftl/ftl_debug.o 00:01:29.832 CC lib/ftl/ftl_io.o 00:01:29.832 CC lib/ftl/ftl_sb.o 00:01:29.832 CC lib/ftl/ftl_l2p.o 00:01:29.832 CC lib/ftl/ftl_l2p_flat.o 00:01:29.832 CC lib/ftl/ftl_band.o 00:01:29.832 CC lib/ftl/ftl_nv_cache.o 00:01:29.832 CC lib/ftl/ftl_band_ops.o 00:01:29.832 CC lib/ftl/ftl_writer.o 00:01:29.832 CC lib/ftl/ftl_rq.o 00:01:29.832 CC lib/ftl/ftl_reloc.o 00:01:29.832 CC lib/ftl/mngt/ftl_mngt.o 00:01:29.832 CC lib/ftl/ftl_l2p_cache.o 00:01:29.832 CC lib/ftl/ftl_p2l.o 00:01:29.832 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:29.832 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:29.832 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:29.832 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:29.832 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:29.832 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:29.832 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:29.832 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:29.832 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:29.832 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:29.832 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:29.832 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:29.832 CC lib/ftl/utils/ftl_conf.o 00:01:29.832 CC lib/ftl/utils/ftl_md.o 00:01:29.832 CC lib/ftl/utils/ftl_mempool.o 00:01:29.832 CC lib/ftl/utils/ftl_property.o 00:01:29.832 CC lib/ftl/utils/ftl_bitmap.o 00:01:29.832 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:29.832 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:29.832 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:29.832 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:29.832 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:01:29.832 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:29.832 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:29.832 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:29.832 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:29.832 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:29.832 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:29.832 CC lib/ftl/base/ftl_base_dev.o 00:01:29.832 CC lib/ftl/ftl_trace.o 00:01:29.832 CC lib/ftl/base/ftl_base_bdev.o 00:01:30.398 LIB libspdk_nbd.a 00:01:30.398 SO libspdk_nbd.so.7.0 00:01:30.398 LIB libspdk_scsi.a 00:01:30.398 SO libspdk_scsi.so.9.0 00:01:30.398 SYMLINK libspdk_nbd.so 00:01:30.398 SYMLINK libspdk_scsi.so 00:01:30.398 LIB libspdk_ublk.a 00:01:30.656 SO libspdk_ublk.so.3.0 00:01:30.657 SYMLINK libspdk_ublk.so 00:01:30.657 CC lib/vhost/vhost.o 00:01:30.657 CC lib/iscsi/conn.o 00:01:30.657 CC lib/vhost/vhost_scsi.o 00:01:30.657 CC lib/iscsi/init_grp.o 00:01:30.657 CC lib/vhost/vhost_rpc.o 00:01:30.657 CC lib/iscsi/iscsi.o 00:01:30.657 CC lib/vhost/rte_vhost_user.o 00:01:30.657 CC lib/iscsi/md5.o 00:01:30.657 CC lib/vhost/vhost_blk.o 00:01:30.657 CC lib/iscsi/param.o 00:01:30.657 CC lib/iscsi/portal_grp.o 00:01:30.657 CC lib/iscsi/tgt_node.o 00:01:30.657 CC lib/iscsi/iscsi_subsystem.o 00:01:30.657 CC lib/iscsi/iscsi_rpc.o 00:01:30.657 CC lib/iscsi/task.o 00:01:30.915 LIB libspdk_ftl.a 00:01:30.915 SO libspdk_ftl.so.9.0 00:01:31.173 SYMLINK libspdk_ftl.so 00:01:31.432 LIB libspdk_nvmf.a 00:01:31.432 SO libspdk_nvmf.so.19.0 00:01:31.432 LIB libspdk_vhost.a 00:01:31.690 SO libspdk_vhost.so.8.0 00:01:31.690 SYMLINK libspdk_vhost.so 00:01:31.690 SYMLINK libspdk_nvmf.so 00:01:31.690 LIB libspdk_iscsi.a 00:01:31.690 SO libspdk_iscsi.so.8.0 00:01:31.949 SYMLINK libspdk_iscsi.so 00:01:32.517 CC module/env_dpdk/env_dpdk_rpc.o 00:01:32.517 CC module/vfu_device/vfu_virtio.o 00:01:32.517 CC module/vfu_device/vfu_virtio_blk.o 00:01:32.517 CC module/vfu_device/vfu_virtio_scsi.o 00:01:32.517 CC module/vfu_device/vfu_virtio_rpc.o 00:01:32.517 CC module/accel/error/accel_error.o 00:01:32.517 CC module/accel/error/accel_error_rpc.o 00:01:32.517 CC module/accel/ioat/accel_ioat_rpc.o 00:01:32.517 CC module/accel/ioat/accel_ioat.o 00:01:32.517 CC module/accel/iaa/accel_iaa_rpc.o 00:01:32.517 CC module/accel/iaa/accel_iaa.o 00:01:32.517 CC module/sock/posix/posix.o 00:01:32.517 CC module/accel/dsa/accel_dsa.o 00:01:32.517 CC module/accel/dsa/accel_dsa_rpc.o 00:01:32.517 CC module/keyring/file/keyring.o 00:01:32.517 CC module/keyring/file/keyring_rpc.o 00:01:32.517 CC module/blob/bdev/blob_bdev.o 00:01:32.517 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:32.517 CC module/scheduler/gscheduler/gscheduler.o 00:01:32.517 CC module/keyring/linux/keyring.o 00:01:32.517 CC module/keyring/linux/keyring_rpc.o 00:01:32.517 LIB libspdk_env_dpdk_rpc.a 00:01:32.517 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:32.517 SO libspdk_env_dpdk_rpc.so.6.0 00:01:32.517 SYMLINK libspdk_env_dpdk_rpc.so 00:01:32.782 LIB libspdk_keyring_file.a 00:01:32.782 LIB libspdk_accel_error.a 00:01:32.782 LIB libspdk_scheduler_gscheduler.a 00:01:32.782 LIB libspdk_keyring_linux.a 00:01:32.782 LIB libspdk_accel_iaa.a 00:01:32.782 SO libspdk_keyring_file.so.1.0 00:01:32.782 LIB libspdk_accel_ioat.a 00:01:32.782 LIB libspdk_scheduler_dpdk_governor.a 00:01:32.782 SO libspdk_accel_error.so.2.0 00:01:32.782 SO libspdk_keyring_linux.so.1.0 00:01:32.782 SO libspdk_scheduler_gscheduler.so.4.0 00:01:32.782 SO libspdk_accel_iaa.so.3.0 00:01:32.782 LIB libspdk_scheduler_dynamic.a 00:01:32.782 SO 
libspdk_accel_ioat.so.6.0 00:01:32.782 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:32.782 LIB libspdk_accel_dsa.a 00:01:32.782 SYMLINK libspdk_keyring_file.so 00:01:32.782 SYMLINK libspdk_accel_error.so 00:01:32.782 SO libspdk_scheduler_dynamic.so.4.0 00:01:32.782 SYMLINK libspdk_keyring_linux.so 00:01:32.782 LIB libspdk_blob_bdev.a 00:01:32.782 SYMLINK libspdk_scheduler_gscheduler.so 00:01:32.782 SYMLINK libspdk_accel_iaa.so 00:01:32.782 SO libspdk_accel_dsa.so.5.0 00:01:32.782 SYMLINK libspdk_accel_ioat.so 00:01:32.782 SO libspdk_blob_bdev.so.11.0 00:01:32.782 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:32.782 SYMLINK libspdk_scheduler_dynamic.so 00:01:32.782 SYMLINK libspdk_accel_dsa.so 00:01:32.782 SYMLINK libspdk_blob_bdev.so 00:01:32.782 LIB libspdk_vfu_device.a 00:01:33.053 SO libspdk_vfu_device.so.3.0 00:01:33.053 SYMLINK libspdk_vfu_device.so 00:01:33.053 LIB libspdk_sock_posix.a 00:01:33.053 SO libspdk_sock_posix.so.6.0 00:01:33.310 SYMLINK libspdk_sock_posix.so 00:01:33.310 CC module/bdev/error/vbdev_error.o 00:01:33.310 CC module/bdev/error/vbdev_error_rpc.o 00:01:33.310 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:33.310 CC module/bdev/delay/vbdev_delay.o 00:01:33.310 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:33.310 CC module/bdev/lvol/vbdev_lvol.o 00:01:33.310 CC module/bdev/raid/bdev_raid.o 00:01:33.310 CC module/bdev/raid/bdev_raid_sb.o 00:01:33.310 CC module/bdev/raid/bdev_raid_rpc.o 00:01:33.310 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:33.310 CC module/bdev/raid/raid1.o 00:01:33.310 CC module/bdev/aio/bdev_aio.o 00:01:33.310 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:33.310 CC module/bdev/raid/raid0.o 00:01:33.310 CC module/bdev/aio/bdev_aio_rpc.o 00:01:33.310 CC module/bdev/split/vbdev_split.o 00:01:33.310 CC module/bdev/raid/concat.o 00:01:33.310 CC module/bdev/split/vbdev_split_rpc.o 00:01:33.310 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:33.310 CC module/bdev/malloc/bdev_malloc.o 00:01:33.310 CC module/bdev/ftl/bdev_ftl.o 00:01:33.310 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:33.310 CC module/bdev/passthru/vbdev_passthru.o 00:01:33.310 CC module/bdev/null/bdev_null.o 00:01:33.310 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:33.310 CC module/bdev/null/bdev_null_rpc.o 00:01:33.310 CC module/bdev/nvme/bdev_nvme.o 00:01:33.310 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:33.310 CC module/bdev/nvme/nvme_rpc.o 00:01:33.310 CC module/bdev/nvme/bdev_mdns_client.o 00:01:33.310 CC module/bdev/nvme/vbdev_opal.o 00:01:33.311 CC module/bdev/gpt/gpt.o 00:01:33.311 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:33.311 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:33.311 CC module/bdev/gpt/vbdev_gpt.o 00:01:33.311 CC module/blobfs/bdev/blobfs_bdev.o 00:01:33.311 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:33.311 CC module/bdev/iscsi/bdev_iscsi.o 00:01:33.311 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:33.311 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:33.311 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:33.311 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:33.569 LIB libspdk_bdev_split.a 00:01:33.569 LIB libspdk_blobfs_bdev.a 00:01:33.569 LIB libspdk_bdev_error.a 00:01:33.569 SO libspdk_bdev_split.so.6.0 00:01:33.569 SO libspdk_bdev_error.so.6.0 00:01:33.569 SO libspdk_blobfs_bdev.so.6.0 00:01:33.569 LIB libspdk_bdev_null.a 00:01:33.569 LIB libspdk_bdev_gpt.a 00:01:33.569 LIB libspdk_bdev_passthru.a 00:01:33.569 LIB libspdk_bdev_zone_block.a 00:01:33.569 LIB libspdk_bdev_ftl.a 00:01:33.569 SO libspdk_bdev_null.so.6.0 00:01:33.569 SO 
libspdk_bdev_gpt.so.6.0 00:01:33.569 SO libspdk_bdev_passthru.so.6.0 00:01:33.569 SYMLINK libspdk_blobfs_bdev.so 00:01:33.569 SYMLINK libspdk_bdev_split.so 00:01:33.569 SO libspdk_bdev_zone_block.so.6.0 00:01:33.569 SO libspdk_bdev_ftl.so.6.0 00:01:33.569 LIB libspdk_bdev_delay.a 00:01:33.569 SYMLINK libspdk_bdev_error.so 00:01:33.569 LIB libspdk_bdev_malloc.a 00:01:33.569 LIB libspdk_bdev_aio.a 00:01:33.569 LIB libspdk_bdev_iscsi.a 00:01:33.569 SO libspdk_bdev_malloc.so.6.0 00:01:33.828 SYMLINK libspdk_bdev_null.so 00:01:33.828 SO libspdk_bdev_delay.so.6.0 00:01:33.828 SYMLINK libspdk_bdev_gpt.so 00:01:33.828 SO libspdk_bdev_aio.so.6.0 00:01:33.828 SYMLINK libspdk_bdev_zone_block.so 00:01:33.828 SYMLINK libspdk_bdev_passthru.so 00:01:33.828 SO libspdk_bdev_iscsi.so.6.0 00:01:33.828 SYMLINK libspdk_bdev_ftl.so 00:01:33.828 LIB libspdk_bdev_lvol.a 00:01:33.828 SYMLINK libspdk_bdev_malloc.so 00:01:33.828 SO libspdk_bdev_lvol.so.6.0 00:01:33.828 SYMLINK libspdk_bdev_delay.so 00:01:33.828 SYMLINK libspdk_bdev_iscsi.so 00:01:33.828 SYMLINK libspdk_bdev_aio.so 00:01:33.828 LIB libspdk_bdev_virtio.a 00:01:33.828 SYMLINK libspdk_bdev_lvol.so 00:01:33.828 SO libspdk_bdev_virtio.so.6.0 00:01:33.828 SYMLINK libspdk_bdev_virtio.so 00:01:34.088 LIB libspdk_bdev_raid.a 00:01:34.088 SO libspdk_bdev_raid.so.6.0 00:01:34.088 SYMLINK libspdk_bdev_raid.so 00:01:35.032 LIB libspdk_bdev_nvme.a 00:01:35.032 SO libspdk_bdev_nvme.so.7.0 00:01:35.032 SYMLINK libspdk_bdev_nvme.so 00:01:35.600 CC module/event/subsystems/scheduler/scheduler.o 00:01:35.600 CC module/event/subsystems/vmd/vmd.o 00:01:35.600 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:35.600 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:35.600 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:35.600 CC module/event/subsystems/sock/sock.o 00:01:35.600 CC module/event/subsystems/keyring/keyring.o 00:01:35.600 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:35.600 CC module/event/subsystems/iobuf/iobuf.o 00:01:35.860 LIB libspdk_event_scheduler.a 00:01:35.860 SO libspdk_event_scheduler.so.4.0 00:01:35.860 LIB libspdk_event_vmd.a 00:01:35.860 LIB libspdk_event_keyring.a 00:01:35.860 LIB libspdk_event_sock.a 00:01:35.860 LIB libspdk_event_vhost_blk.a 00:01:35.860 LIB libspdk_event_iobuf.a 00:01:35.860 LIB libspdk_event_vfu_tgt.a 00:01:35.860 SO libspdk_event_keyring.so.1.0 00:01:35.860 SO libspdk_event_sock.so.5.0 00:01:35.860 SO libspdk_event_vmd.so.6.0 00:01:35.860 SYMLINK libspdk_event_scheduler.so 00:01:35.860 SO libspdk_event_vhost_blk.so.3.0 00:01:35.860 SO libspdk_event_vfu_tgt.so.3.0 00:01:35.860 SO libspdk_event_iobuf.so.3.0 00:01:35.860 SYMLINK libspdk_event_keyring.so 00:01:35.860 SYMLINK libspdk_event_sock.so 00:01:35.860 SYMLINK libspdk_event_vfu_tgt.so 00:01:35.860 SYMLINK libspdk_event_vmd.so 00:01:35.860 SYMLINK libspdk_event_vhost_blk.so 00:01:35.860 SYMLINK libspdk_event_iobuf.so 00:01:36.119 CC module/event/subsystems/accel/accel.o 00:01:36.378 LIB libspdk_event_accel.a 00:01:36.378 SO libspdk_event_accel.so.6.0 00:01:36.378 SYMLINK libspdk_event_accel.so 00:01:36.638 CC module/event/subsystems/bdev/bdev.o 00:01:36.898 LIB libspdk_event_bdev.a 00:01:36.898 SO libspdk_event_bdev.so.6.0 00:01:36.898 SYMLINK libspdk_event_bdev.so 00:01:37.157 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:37.157 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:37.157 CC module/event/subsystems/ublk/ublk.o 00:01:37.157 CC module/event/subsystems/nbd/nbd.o 00:01:37.157 CC module/event/subsystems/scsi/scsi.o 00:01:37.416 LIB libspdk_event_ublk.a 
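
The CC/LIB/SO/SYMLINK lines in this stretch are SPDK's make-driven build running on top of the DPDK that meson/ninja produced earlier. A minimal sketch of reproducing this stage outside CI, assuming a fresh checkout with submodules; the exact configure flags this job used are not shown in the log, so the defaults below are illustrative:

  # Hedged sketch: rebuild SPDK (and its bundled DPDK) locally.
  git clone https://github.com/spdk/spdk && cd spdk
  git submodule update --init      # pulls the bundled dpdk/ submodule
  sudo ./scripts/pkgdep.sh         # install build prerequisites
  ./configure                      # sets up the DPDK meson/ninja sub-build
  make -j"$(nproc)"                # emits the CC/LIB/SO lines seen above
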
00:01:37.416 SO libspdk_event_ublk.so.3.0 00:01:37.416 LIB libspdk_event_nbd.a 00:01:37.416 LIB libspdk_event_scsi.a 00:01:37.416 LIB libspdk_event_nvmf.a 00:01:37.416 SO libspdk_event_nbd.so.6.0 00:01:37.416 SYMLINK libspdk_event_ublk.so 00:01:37.416 SO libspdk_event_scsi.so.6.0 00:01:37.416 SO libspdk_event_nvmf.so.6.0 00:01:37.416 SYMLINK libspdk_event_nbd.so 00:01:37.416 SYMLINK libspdk_event_scsi.so 00:01:37.416 SYMLINK libspdk_event_nvmf.so 00:01:37.676 CC module/event/subsystems/iscsi/iscsi.o 00:01:37.676 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:37.935 LIB libspdk_event_vhost_scsi.a 00:01:37.935 LIB libspdk_event_iscsi.a 00:01:37.935 SO libspdk_event_vhost_scsi.so.3.0 00:01:37.935 SO libspdk_event_iscsi.so.6.0 00:01:37.935 SYMLINK libspdk_event_vhost_scsi.so 00:01:37.935 SYMLINK libspdk_event_iscsi.so 00:01:38.195 SO libspdk.so.6.0 00:01:38.195 SYMLINK libspdk.so 00:01:38.454 CC app/spdk_lspci/spdk_lspci.o 00:01:38.454 CXX app/trace/trace.o 00:01:38.454 CC app/trace_record/trace_record.o 00:01:38.454 CC app/spdk_nvme_discover/discovery_aer.o 00:01:38.454 CC app/spdk_nvme_perf/perf.o 00:01:38.454 CC app/spdk_top/spdk_top.o 00:01:38.454 CC app/spdk_nvme_identify/identify.o 00:01:38.454 TEST_HEADER include/spdk/accel.h 00:01:38.454 TEST_HEADER include/spdk/assert.h 00:01:38.454 TEST_HEADER include/spdk/accel_module.h 00:01:38.454 TEST_HEADER include/spdk/barrier.h 00:01:38.454 CC test/rpc_client/rpc_client_test.o 00:01:38.454 TEST_HEADER include/spdk/bdev.h 00:01:38.454 TEST_HEADER include/spdk/base64.h 00:01:38.454 TEST_HEADER include/spdk/bdev_module.h 00:01:38.454 TEST_HEADER include/spdk/bdev_zone.h 00:01:38.454 TEST_HEADER include/spdk/bit_array.h 00:01:38.454 TEST_HEADER include/spdk/bit_pool.h 00:01:38.454 TEST_HEADER include/spdk/blob_bdev.h 00:01:38.454 TEST_HEADER include/spdk/blob.h 00:01:38.454 TEST_HEADER include/spdk/blobfs.h 00:01:38.454 TEST_HEADER include/spdk/conf.h 00:01:38.454 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:38.454 TEST_HEADER include/spdk/config.h 00:01:38.454 TEST_HEADER include/spdk/cpuset.h 00:01:38.454 TEST_HEADER include/spdk/crc16.h 00:01:38.454 TEST_HEADER include/spdk/crc64.h 00:01:38.454 TEST_HEADER include/spdk/dif.h 00:01:38.454 TEST_HEADER include/spdk/crc32.h 00:01:38.455 TEST_HEADER include/spdk/endian.h 00:01:38.455 TEST_HEADER include/spdk/env_dpdk.h 00:01:38.455 TEST_HEADER include/spdk/dma.h 00:01:38.455 TEST_HEADER include/spdk/env.h 00:01:38.455 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:38.455 CC app/nvmf_tgt/nvmf_main.o 00:01:38.455 TEST_HEADER include/spdk/event.h 00:01:38.455 TEST_HEADER include/spdk/file.h 00:01:38.455 TEST_HEADER include/spdk/fd.h 00:01:38.455 TEST_HEADER include/spdk/ftl.h 00:01:38.455 TEST_HEADER include/spdk/fd_group.h 00:01:38.455 CC app/spdk_dd/spdk_dd.o 00:01:38.455 TEST_HEADER include/spdk/gpt_spec.h 00:01:38.455 TEST_HEADER include/spdk/hexlify.h 00:01:38.455 TEST_HEADER include/spdk/histogram_data.h 00:01:38.455 TEST_HEADER include/spdk/idxd_spec.h 00:01:38.455 TEST_HEADER include/spdk/idxd.h 00:01:38.455 TEST_HEADER include/spdk/init.h 00:01:38.455 TEST_HEADER include/spdk/ioat_spec.h 00:01:38.455 TEST_HEADER include/spdk/ioat.h 00:01:38.455 TEST_HEADER include/spdk/jsonrpc.h 00:01:38.455 TEST_HEADER include/spdk/json.h 00:01:38.455 CC app/iscsi_tgt/iscsi_tgt.o 00:01:38.455 TEST_HEADER include/spdk/iscsi_spec.h 00:01:38.455 TEST_HEADER include/spdk/keyring.h 00:01:38.455 TEST_HEADER include/spdk/likely.h 00:01:38.455 TEST_HEADER include/spdk/log.h 00:01:38.455 TEST_HEADER 
include/spdk/keyring_module.h 00:01:38.455 TEST_HEADER include/spdk/lvol.h 00:01:38.455 TEST_HEADER include/spdk/memory.h 00:01:38.455 TEST_HEADER include/spdk/mmio.h 00:01:38.455 TEST_HEADER include/spdk/notify.h 00:01:38.455 TEST_HEADER include/spdk/nvme.h 00:01:38.455 TEST_HEADER include/spdk/nbd.h 00:01:38.455 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:38.455 TEST_HEADER include/spdk/nvme_intel.h 00:01:38.455 TEST_HEADER include/spdk/nvme_spec.h 00:01:38.455 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:38.455 TEST_HEADER include/spdk/nvme_zns.h 00:01:38.455 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:38.455 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:38.455 TEST_HEADER include/spdk/nvmf.h 00:01:38.455 TEST_HEADER include/spdk/nvmf_spec.h 00:01:38.455 CC app/spdk_tgt/spdk_tgt.o 00:01:38.455 TEST_HEADER include/spdk/opal.h 00:01:38.455 TEST_HEADER include/spdk/nvmf_transport.h 00:01:38.455 TEST_HEADER include/spdk/pci_ids.h 00:01:38.455 TEST_HEADER include/spdk/opal_spec.h 00:01:38.455 TEST_HEADER include/spdk/pipe.h 00:01:38.455 TEST_HEADER include/spdk/queue.h 00:01:38.455 TEST_HEADER include/spdk/reduce.h 00:01:38.455 TEST_HEADER include/spdk/rpc.h 00:01:38.455 TEST_HEADER include/spdk/scsi.h 00:01:38.455 TEST_HEADER include/spdk/scheduler.h 00:01:38.455 TEST_HEADER include/spdk/scsi_spec.h 00:01:38.455 TEST_HEADER include/spdk/sock.h 00:01:38.725 TEST_HEADER include/spdk/string.h 00:01:38.725 TEST_HEADER include/spdk/stdinc.h 00:01:38.725 TEST_HEADER include/spdk/thread.h 00:01:38.725 TEST_HEADER include/spdk/trace.h 00:01:38.725 TEST_HEADER include/spdk/trace_parser.h 00:01:38.725 TEST_HEADER include/spdk/tree.h 00:01:38.725 TEST_HEADER include/spdk/ublk.h 00:01:38.725 TEST_HEADER include/spdk/util.h 00:01:38.725 TEST_HEADER include/spdk/uuid.h 00:01:38.725 TEST_HEADER include/spdk/version.h 00:01:38.725 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:38.725 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:38.725 TEST_HEADER include/spdk/vhost.h 00:01:38.725 TEST_HEADER include/spdk/vmd.h 00:01:38.725 TEST_HEADER include/spdk/zipf.h 00:01:38.725 CXX test/cpp_headers/accel.o 00:01:38.725 CXX test/cpp_headers/accel_module.o 00:01:38.725 TEST_HEADER include/spdk/xor.h 00:01:38.725 CXX test/cpp_headers/assert.o 00:01:38.725 CXX test/cpp_headers/base64.o 00:01:38.725 CXX test/cpp_headers/bdev.o 00:01:38.725 CXX test/cpp_headers/barrier.o 00:01:38.725 CXX test/cpp_headers/bdev_zone.o 00:01:38.725 CXX test/cpp_headers/bdev_module.o 00:01:38.725 CXX test/cpp_headers/bit_pool.o 00:01:38.725 CXX test/cpp_headers/blob_bdev.o 00:01:38.725 CXX test/cpp_headers/bit_array.o 00:01:38.725 CXX test/cpp_headers/blob.o 00:01:38.725 CXX test/cpp_headers/blobfs.o 00:01:38.725 CXX test/cpp_headers/conf.o 00:01:38.725 CXX test/cpp_headers/blobfs_bdev.o 00:01:38.725 CXX test/cpp_headers/config.o 00:01:38.725 CXX test/cpp_headers/crc32.o 00:01:38.725 CXX test/cpp_headers/cpuset.o 00:01:38.725 CXX test/cpp_headers/crc16.o 00:01:38.725 CXX test/cpp_headers/crc64.o 00:01:38.725 CXX test/cpp_headers/dma.o 00:01:38.725 CXX test/cpp_headers/endian.o 00:01:38.725 CXX test/cpp_headers/dif.o 00:01:38.725 CXX test/cpp_headers/env.o 00:01:38.725 CXX test/cpp_headers/event.o 00:01:38.725 CXX test/cpp_headers/fd_group.o 00:01:38.725 CXX test/cpp_headers/env_dpdk.o 00:01:38.725 CXX test/cpp_headers/fd.o 00:01:38.725 CXX test/cpp_headers/file.o 00:01:38.725 CXX test/cpp_headers/hexlify.o 00:01:38.725 CXX test/cpp_headers/gpt_spec.o 00:01:38.725 CXX test/cpp_headers/ftl.o 00:01:38.725 CXX test/cpp_headers/idxd.o 
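
The TEST_HEADER includes and the CXX test/cpp_headers/*.o objects that follow implement a header self-containedness check: each public spdk header is compiled as the sole include of its own C++ translation unit, so a header that forgets one of its dependencies fails here rather than in user code. A rough, hedged equivalent of the idea, with illustrative paths rather than the harness SPDK actually uses:

  # Hedged sketch of the per-header compile check behind the CXX lines.
  for hdr in include/spdk/*.h; do
    printf '#include <spdk/%s>\nint main(void){return 0;}\n' \
      "$(basename "$hdr")" > tu.cpp
    g++ -I include -c tu.cpp -o /dev/null || echo "not self-contained: $hdr"
  done
  rm -f tu.cpp
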
00:01:38.725 CXX test/cpp_headers/init.o 00:01:38.725 CXX test/cpp_headers/ioat.o 00:01:38.725 CXX test/cpp_headers/histogram_data.o 00:01:38.725 CXX test/cpp_headers/ioat_spec.o 00:01:38.725 CXX test/cpp_headers/idxd_spec.o 00:01:38.725 CXX test/cpp_headers/json.o 00:01:38.725 CXX test/cpp_headers/keyring.o 00:01:38.725 CXX test/cpp_headers/jsonrpc.o 00:01:38.725 CXX test/cpp_headers/iscsi_spec.o 00:01:38.725 CXX test/cpp_headers/keyring_module.o 00:01:38.725 CXX test/cpp_headers/likely.o 00:01:38.725 CXX test/cpp_headers/log.o 00:01:38.725 CXX test/cpp_headers/memory.o 00:01:38.725 CXX test/cpp_headers/lvol.o 00:01:38.725 CXX test/cpp_headers/mmio.o 00:01:38.725 CXX test/cpp_headers/notify.o 00:01:38.725 CXX test/cpp_headers/nvme.o 00:01:38.725 CXX test/cpp_headers/nbd.o 00:01:38.725 CXX test/cpp_headers/nvme_ocssd.o 00:01:38.725 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:38.725 CXX test/cpp_headers/nvme_intel.o 00:01:38.725 CXX test/cpp_headers/nvme_spec.o 00:01:38.725 CXX test/cpp_headers/nvme_zns.o 00:01:38.725 CXX test/cpp_headers/nvmf_cmd.o 00:01:38.725 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:38.725 CXX test/cpp_headers/nvmf.o 00:01:38.725 CXX test/cpp_headers/nvmf_spec.o 00:01:38.725 CXX test/cpp_headers/nvmf_transport.o 00:01:38.725 CXX test/cpp_headers/opal.o 00:01:38.725 CXX test/cpp_headers/opal_spec.o 00:01:38.725 CXX test/cpp_headers/pci_ids.o 00:01:38.725 CXX test/cpp_headers/pipe.o 00:01:38.725 CXX test/cpp_headers/queue.o 00:01:38.725 CXX test/cpp_headers/reduce.o 00:01:38.725 CC examples/ioat/perf/perf.o 00:01:38.725 CC test/env/memory/memory_ut.o 00:01:38.725 CC app/fio/nvme/fio_plugin.o 00:01:38.725 CC examples/ioat/verify/verify.o 00:01:38.725 CC test/env/pci/pci_ut.o 00:01:38.725 CC examples/util/zipf/zipf.o 00:01:38.725 CC test/app/jsoncat/jsoncat.o 00:01:38.725 CXX test/cpp_headers/rpc.o 00:01:38.725 CC test/env/vtophys/vtophys.o 00:01:38.725 CC test/dma/test_dma/test_dma.o 00:01:38.726 CC test/app/stub/stub.o 00:01:38.726 CC test/thread/poller_perf/poller_perf.o 00:01:38.726 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:38.726 CC test/app/histogram_perf/histogram_perf.o 00:01:38.726 LINK spdk_lspci 00:01:38.998 CC app/fio/bdev/fio_plugin.o 00:01:38.998 CC test/app/bdev_svc/bdev_svc.o 00:01:38.998 LINK spdk_nvme_discover 00:01:38.998 LINK nvmf_tgt 00:01:38.998 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:38.998 LINK spdk_tgt 00:01:39.259 LINK rpc_client_test 00:01:39.259 LINK interrupt_tgt 00:01:39.259 LINK iscsi_tgt 00:01:39.259 CC test/env/mem_callbacks/mem_callbacks.o 00:01:39.259 LINK spdk_trace_record 00:01:39.259 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:39.259 CXX test/cpp_headers/scheduler.o 00:01:39.259 CXX test/cpp_headers/scsi.o 00:01:39.259 CXX test/cpp_headers/scsi_spec.o 00:01:39.259 CXX test/cpp_headers/sock.o 00:01:39.259 CXX test/cpp_headers/stdinc.o 00:01:39.259 LINK zipf 00:01:39.259 CXX test/cpp_headers/string.o 00:01:39.259 CXX test/cpp_headers/trace.o 00:01:39.259 CXX test/cpp_headers/trace_parser.o 00:01:39.259 CXX test/cpp_headers/thread.o 00:01:39.259 CXX test/cpp_headers/tree.o 00:01:39.259 CXX test/cpp_headers/ublk.o 00:01:39.259 CXX test/cpp_headers/util.o 00:01:39.259 CXX test/cpp_headers/uuid.o 00:01:39.259 CXX test/cpp_headers/version.o 00:01:39.259 CXX test/cpp_headers/vfio_user_spec.o 00:01:39.259 CXX test/cpp_headers/vfio_user_pci.o 00:01:39.259 LINK histogram_perf 00:01:39.259 CXX test/cpp_headers/vhost.o 00:01:39.259 CXX test/cpp_headers/xor.o 00:01:39.259 CXX test/cpp_headers/vmd.o 00:01:39.259 CXX 
test/cpp_headers/zipf.o 00:01:39.259 LINK ioat_perf 00:01:39.259 LINK verify 00:01:39.259 LINK jsoncat 00:01:39.259 LINK spdk_dd 00:01:39.259 LINK vtophys 00:01:39.259 LINK poller_perf 00:01:39.259 LINK env_dpdk_post_init 00:01:39.259 LINK spdk_trace 00:01:39.259 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:39.259 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:39.518 LINK stub 00:01:39.518 LINK bdev_svc 00:01:39.518 LINK pci_ut 00:01:39.518 LINK test_dma 00:01:39.518 LINK spdk_bdev 00:01:39.777 LINK nvme_fuzz 00:01:39.777 LINK spdk_nvme_perf 00:01:39.777 LINK spdk_nvme 00:01:39.777 LINK spdk_nvme_identify 00:01:39.777 CC examples/vmd/led/led.o 00:01:39.777 CC examples/vmd/lsvmd/lsvmd.o 00:01:39.777 CC examples/idxd/perf/perf.o 00:01:39.777 CC app/vhost/vhost.o 00:01:39.777 LINK vhost_fuzz 00:01:39.777 CC examples/sock/hello_world/hello_sock.o 00:01:39.777 CC examples/thread/thread/thread_ex.o 00:01:39.777 CC test/event/event_perf/event_perf.o 00:01:39.777 CC test/event/reactor/reactor.o 00:01:39.777 CC test/event/reactor_perf/reactor_perf.o 00:01:39.777 CC test/event/app_repeat/app_repeat.o 00:01:39.777 CC test/event/scheduler/scheduler.o 00:01:39.777 LINK spdk_top 00:01:39.777 LINK led 00:01:39.777 LINK lsvmd 00:01:39.777 LINK mem_callbacks 00:01:39.777 LINK reactor 00:01:40.035 LINK reactor_perf 00:01:40.035 LINK event_perf 00:01:40.035 LINK vhost 00:01:40.035 LINK hello_sock 00:01:40.035 LINK app_repeat 00:01:40.035 LINK thread 00:01:40.035 LINK idxd_perf 00:01:40.035 LINK scheduler 00:01:40.035 CC test/nvme/cuse/cuse.o 00:01:40.035 CC test/nvme/reset/reset.o 00:01:40.035 CC test/nvme/fdp/fdp.o 00:01:40.035 CC test/nvme/startup/startup.o 00:01:40.035 CC test/nvme/aer/aer.o 00:01:40.035 CC test/nvme/fused_ordering/fused_ordering.o 00:01:40.035 CC test/nvme/boot_partition/boot_partition.o 00:01:40.035 CC test/nvme/sgl/sgl.o 00:01:40.035 CC test/nvme/err_injection/err_injection.o 00:01:40.035 CC test/nvme/overhead/overhead.o 00:01:40.035 CC test/nvme/e2edp/nvme_dp.o 00:01:40.035 CC test/nvme/simple_copy/simple_copy.o 00:01:40.035 CC test/nvme/reserve/reserve.o 00:01:40.035 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:40.035 CC test/nvme/compliance/nvme_compliance.o 00:01:40.035 CC test/nvme/connect_stress/connect_stress.o 00:01:40.035 CC test/accel/dif/dif.o 00:01:40.035 CC test/blobfs/mkfs/mkfs.o 00:01:40.035 LINK memory_ut 00:01:40.035 CC test/lvol/esnap/esnap.o 00:01:40.293 LINK startup 00:01:40.293 LINK boot_partition 00:01:40.293 LINK err_injection 00:01:40.293 LINK fused_ordering 00:01:40.293 LINK doorbell_aers 00:01:40.293 LINK reserve 00:01:40.293 LINK connect_stress 00:01:40.293 LINK simple_copy 00:01:40.293 LINK reset 00:01:40.293 LINK aer 00:01:40.293 LINK nvme_dp 00:01:40.293 LINK sgl 00:01:40.293 LINK overhead 00:01:40.293 LINK mkfs 00:01:40.293 LINK fdp 00:01:40.293 LINK nvme_compliance 00:01:40.293 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:40.293 CC examples/nvme/hotplug/hotplug.o 00:01:40.293 CC examples/nvme/hello_world/hello_world.o 00:01:40.293 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:40.293 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:40.293 CC examples/nvme/arbitration/arbitration.o 00:01:40.293 CC examples/nvme/reconnect/reconnect.o 00:01:40.293 CC examples/nvme/abort/abort.o 00:01:40.551 CC examples/accel/perf/accel_perf.o 00:01:40.551 CC examples/blob/cli/blobcli.o 00:01:40.551 CC examples/blob/hello_world/hello_blob.o 00:01:40.551 LINK dif 00:01:40.551 LINK pmr_persistence 00:01:40.551 LINK cmb_copy 00:01:40.551 LINK hello_world 
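
The LINK lines above produce standalone example binaries (hello_world, reconnect, arbitration, and so on) alongside the test objects. Running the NVMe examples requires hugepages and devices bound to a userspace driver first; a hedged sketch, with binary paths that follow SPDK's usual build layout but are not confirmed by this log:

  # Hedged sketch: run one of the example binaries linked above.
  sudo HUGEMEM=2048 scripts/setup.sh   # reserve hugepages, bind NVMe devices
  sudo ./build/examples/hello_world    # enumerate controllers, do a test I/O
  sudo scripts/setup.sh reset          # return devices to the kernel driver
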
00:01:40.551 LINK hotplug 00:01:40.551 LINK iscsi_fuzz 00:01:40.551 LINK reconnect 00:01:40.551 LINK arbitration 00:01:40.810 LINK hello_blob 00:01:40.810 LINK abort 00:01:40.810 LINK nvme_manage 00:01:40.810 LINK accel_perf 00:01:40.810 LINK blobcli 00:01:41.117 CC test/bdev/bdevio/bdevio.o 00:01:41.117 LINK cuse 00:01:41.385 CC examples/bdev/bdevperf/bdevperf.o 00:01:41.385 CC examples/bdev/hello_world/hello_bdev.o 00:01:41.385 LINK bdevio 00:01:41.385 LINK hello_bdev 00:01:41.951 LINK bdevperf 00:01:42.210 CC examples/nvmf/nvmf/nvmf.o 00:01:42.468 LINK nvmf 00:01:43.404 LINK esnap 00:01:43.972 00:01:43.972 real 0m43.578s 00:01:43.972 user 6m28.928s 00:01:43.972 sys 3m20.664s 00:01:43.972 00:03:02 make -- common/autotest_common.sh@1118 -- $ xtrace_disable 00:01:43.972 00:03:02 make -- common/autotest_common.sh@10 -- $ set +x 00:01:43.972 ************************************ 00:01:43.972 END TEST make 00:01:43.972 ************************************ 00:01:43.972 00:03:02 -- common/autotest_common.sh@1136 -- $ return 0 00:01:43.972 00:03:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:43.972 00:03:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:43.972 00:03:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:43.972 00:03:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.972 00:03:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:43.972 00:03:02 -- pm/common@44 -- $ pid=1219037 00:01:43.972 00:03:02 -- pm/common@50 -- $ kill -TERM 1219037 00:01:43.972 00:03:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.972 00:03:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:43.972 00:03:02 -- pm/common@44 -- $ pid=1219039 00:01:43.972 00:03:02 -- pm/common@50 -- $ kill -TERM 1219039 00:01:43.972 00:03:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.972 00:03:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:43.972 00:03:02 -- pm/common@44 -- $ pid=1219041 00:01:43.972 00:03:02 -- pm/common@50 -- $ kill -TERM 1219041 00:01:43.972 00:03:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.972 00:03:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:43.972 00:03:02 -- pm/common@44 -- $ pid=1219065 00:01:43.972 00:03:02 -- pm/common@50 -- $ sudo -E kill -TERM 1219065 00:01:43.972 00:03:02 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:43.972 00:03:02 -- nvmf/common.sh@7 -- # uname -s 00:01:43.972 00:03:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:43.972 00:03:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:43.972 00:03:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:43.972 00:03:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:43.972 00:03:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:43.972 00:03:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:43.972 00:03:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:43.972 00:03:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:43.972 00:03:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:43.972 00:03:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:43.972 00:03:02 -- 
nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:01:43.972 00:03:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:01:43.972 00:03:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:43.972 00:03:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:43.972 00:03:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:43.972 00:03:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:43.972 00:03:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:43.972 00:03:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:43.972 00:03:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:43.972 00:03:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:43.972 00:03:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.972 00:03:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.972 00:03:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.972 00:03:02 -- paths/export.sh@5 -- # export PATH 00:01:43.972 00:03:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.972 00:03:02 -- nvmf/common.sh@47 -- # : 0 00:01:43.972 00:03:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:43.972 00:03:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:43.972 00:03:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:43.972 00:03:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:43.972 00:03:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:43.972 00:03:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:43.972 00:03:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:43.972 00:03:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:43.972 00:03:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:43.972 00:03:02 -- spdk/autotest.sh@32 -- # uname -s 00:01:43.972 00:03:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:43.972 00:03:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:43.972 00:03:02 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:43.972 00:03:02 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:43.972 00:03:02 -- spdk/autotest.sh@40 -- # echo 
00:01:43.972 00:03:02 -- spdk/autotest.sh@44 -- # modprobe nbd
00:01:43.972 00:03:02 -- spdk/autotest.sh@46 -- # type -P udevadm
00:01:43.972 00:03:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:01:43.972 00:03:02 -- spdk/autotest.sh@48 -- # udevadm_pid=1278234
00:01:43.972 00:03:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:01:43.972 00:03:02 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:01:43.972 00:03:02 -- pm/common@17 -- # local monitor
00:01:43.972 00:03:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:01:43.972 00:03:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:01:43.972 00:03:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:01:43.972 00:03:02 -- pm/common@21 -- # date +%s
00:01:43.972 00:03:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:01:43.972 00:03:02 -- pm/common@21 -- # date +%s
00:01:43.972 00:03:02 -- pm/common@25 -- # sleep 1
00:01:43.972 00:03:02 -- pm/common@21 -- # date +%s
00:01:43.972 00:03:02 -- pm/common@21 -- # date +%s
00:01:43.972 00:03:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721080982
00:01:43.972 00:03:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721080982
00:01:43.972 00:03:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721080982
00:01:43.972 00:03:02 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721080982
00:01:43.972 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721080982_collect-cpu-load.pm.log
00:01:43.972 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721080982_collect-vmstat.pm.log
00:01:43.972 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721080982_collect-cpu-temp.pm.log
00:01:43.972 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721080982_collect-bmc-pm.bmc.pm.log
00:01:44.908 00:03:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:01:44.908 00:03:03 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:01:44.908 00:03:03 -- common/autotest_common.sh@716 -- # xtrace_disable
00:01:44.908 00:03:03 -- common/autotest_common.sh@10 -- # set +x
00:01:44.908 00:03:03 -- spdk/autotest.sh@59 -- # create_test_list
00:01:44.908 00:03:03 -- common/autotest_common.sh@740 -- # xtrace_disable
00:01:44.908 00:03:03 -- common/autotest_common.sh@10 -- # set +x
00:01:45.165 00:03:03 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:01:45.165 00:03:03 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:45.165 00:03:03 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
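The start_monitor_resources trace above launches four perf/pm collectors against the shared power directory, and the stop_monitor_resources trace at the top of this section signals them again through per-collector pid files. A hedged sketch of that lifecycle; the helper names and pid-file layout are assumptions for illustration, not the exact pm/common implementation:

# Hedged sketch: start each collector, record its pid, stop via SIGTERM.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
power_dir=$rootdir/../output/power
monitors=(collect-cpu-load collect-vmstat collect-cpu-temp)
for mon in "${monitors[@]}"; do
  # -d output dir, -l log to file, -p per-run log prefix (flags as in the log).
  "$rootdir/scripts/perf/pm/$mon" -d "$power_dir" -l -p "monitor.autotest.sh.$(date +%s)" &
  echo $! > "$power_dir/$mon.pid"   # assumed pid-file convention
done
# Teardown mirrors pm/common@42..@50 above; collect-bmc-pm additionally
# needs sudo -E, as the log shows, and is omitted here.
for mon in "${monitors[@]}"; do
  [[ -e $power_dir/$mon.pid ]] && kill -TERM "$(< "$power_dir/$mon.pid")"
done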
00:01:45.165 00:03:03 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:45.165 00:03:03 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:45.165 00:03:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:01:45.165 00:03:03 -- common/autotest_common.sh@1449 -- # uname
00:01:45.165 00:03:03 -- common/autotest_common.sh@1449 -- # '[' Linux = FreeBSD ']'
00:01:45.165 00:03:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:01:45.165 00:03:03 -- common/autotest_common.sh@1469 -- # uname
00:01:45.165 00:03:03 -- common/autotest_common.sh@1469 -- # [[ Linux = FreeBSD ]]
00:01:45.165 00:03:03 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:01:45.165 00:03:03 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:01:45.165 00:03:03 -- spdk/autotest.sh@72 -- # hash lcov
00:01:45.165 00:03:03 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:01:45.165 00:03:03 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:01:45.165 --rc lcov_branch_coverage=1
00:01:45.165 --rc lcov_function_coverage=1
00:01:45.165 --rc genhtml_branch_coverage=1
00:01:45.165 --rc genhtml_function_coverage=1
00:01:45.165 --rc genhtml_legend=1
00:01:45.165 --rc geninfo_all_blocks=1
00:01:45.165 '
00:01:45.165 00:03:03 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:01:45.166 --rc lcov_branch_coverage=1
00:01:45.166 --rc lcov_function_coverage=1
00:01:45.166 --rc genhtml_branch_coverage=1
00:01:45.166 --rc genhtml_function_coverage=1
00:01:45.166 --rc genhtml_legend=1
00:01:45.166 --rc geninfo_all_blocks=1
00:01:45.166 '
00:01:45.166 00:03:03 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:01:45.166 --rc lcov_branch_coverage=1
00:01:45.166 --rc lcov_function_coverage=1
00:01:45.166 --rc genhtml_branch_coverage=1
00:01:45.166 --rc genhtml_function_coverage=1
00:01:45.166 --rc genhtml_legend=1
00:01:45.166 --rc geninfo_all_blocks=1
00:01:45.166 --no-external'
00:01:45.166 00:03:03 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:01:45.166 --rc lcov_branch_coverage=1
00:01:45.166 --rc lcov_function_coverage=1
00:01:45.166 --rc genhtml_branch_coverage=1
00:01:45.166 --rc genhtml_function_coverage=1
00:01:45.166 --rc genhtml_legend=1
00:01:45.166 --rc geninfo_all_blocks=1
00:01:45.166 --no-external'
00:01:45.166 00:03:03 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:01:45.166 lcov: LCOV version 1.14
00:01:45.166 00:03:03 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found
00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno
00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found
00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno
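autotest.sh@85 above captures a coverage baseline before any test runs: lcov -c -i records every instrumented file with zero execution counts, so sources the tests never touch still appear in the final report. The long run of geninfo "no functions found" warnings that continues below is expected, since the cpp_headers objects compile headers that define no functions of their own. A sketch of the capture, assuming the usual lcov baseline-plus-merge flow; the post-test merge step is an assumption about how cov_base.info is consumed later, not something this log shows:

# Hedged sketch of the baseline capture and an assumed later merge.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
out=$rootdir/../output
LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1'
# -c capture, -i initial (all-zero counts), -t names the tracefile.
lcov $LCOV_OPTS --no-external -q -c -i -t Baseline -d "$rootdir" -o "$out/cov_base.info"
# ... tests run here ...
lcov $LCOV_OPTS --no-external -q -c -t Tests -d "$rootdir" -o "$out/cov_test.info"
# Assumed merge: adding both tracefiles keeps never-executed files visible.
lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"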
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:01:49.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:01:49.348 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:01:49.349 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:01:49.349 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:01:49.349 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno
00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found
00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno
00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found
00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno
00:01:49.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found
00:01:49.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno
00:02:04.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:02:04.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:02:09.484 00:03:27 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:02:09.484 00:03:27 -- common/autotest_common.sh@716 -- # xtrace_disable
00:02:09.484 00:03:27 -- common/autotest_common.sh@10 -- # set +x
00:02:09.484 00:03:27 -- spdk/autotest.sh@91 -- # rm -f
00:02:09.484 00:03:27 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:12.018 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:02:12.018 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:02:12.018 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:02:12.018 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:02:12.018 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:02:12.018 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:02:12.018 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:02:12.018 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:02:12.018 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:02:12.018 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:02:12.018 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:02:12.018 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:02:12.018 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:02:12.018 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:02:12.018 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:02:12.018 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:02:12.018 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:02:12.018 00:03:30 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:02:12.018 00:03:30 -- common/autotest_common.sh@1663 -- # zoned_devs=()
00:02:12.018 00:03:30 -- common/autotest_common.sh@1663 -- # local -gA zoned_devs
00:02:12.018 00:03:30 -- common/autotest_common.sh@1664 -- # local nvme bdf
00:02:12.018 00:03:30 -- common/autotest_common.sh@1666 -- # for nvme in /sys/block/nvme*
00:02:12.018 00:03:30 -- common/autotest_common.sh@1667 -- # is_block_zoned nvme0n1
00:02:12.018 00:03:30 -- common/autotest_common.sh@1656 -- # local device=nvme0n1
00:02:12.018 00:03:30 -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:12.018 00:03:30 -- common/autotest_common.sh@1659 -- # [[ none != none ]]
00:02:12.018 00:03:30 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:02:12.018 00:03:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:12.018 00:03:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:12.018 00:03:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:02:12.018 00:03:30 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:02:12.018 00:03:30 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:02:12.018 No valid GPT data, bailing
00:02:12.018 00:03:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:12.018 00:03:30 -- scripts/common.sh@391 -- # pt=
00:02:12.018 00:03:30 -- scripts/common.sh@392 -- # return 1
00:02:12.018 00:03:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:02:12.018 1+0 records in
00:02:12.018 1+0 records out
00:02:12.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00177353 s, 591 MB/s
00:02:12.018 00:03:30 -- spdk/autotest.sh@118 -- # sync
00:02:12.018 00:03:30 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:02:12.018 00:03:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:02:12.018 00:03:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:02:17.290 00:03:36 -- spdk/autotest.sh@124 -- # uname -s
00:02:17.290 00:03:36 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:02:17.290 00:03:36 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:02:17.290 00:03:36 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:02:17.290 00:03:36 -- common/autotest_common.sh@1099 -- # xtrace_disable
00:02:17.290 00:03:36 -- common/autotest_common.sh@10 -- # set +x
00:02:17.290 ************************************
00:02:17.290 START TEST setup.sh
00:02:17.290 ************************************
00:02:17.290 00:03:36 setup.sh -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:02:17.290 * Looking for test storage...
00:02:17.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:17.290 00:03:36 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:02:17.549 00:03:36 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:02:17.549 00:03:36 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:02:17.549 00:03:36 setup.sh -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:02:17.549 00:03:36 setup.sh -- common/autotest_common.sh@1099 -- # xtrace_disable
00:02:17.549 00:03:36 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:17.549 ************************************
00:02:17.549 START TEST acl
00:02:17.549 ************************************
00:02:17.549 00:03:36 setup.sh.acl -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:02:17.549 * Looking for test storage...
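The wipe traced above is guarded: autotest.sh@110 through @114 only zero a namespace after block_in_use fails to find a partition table on it, first via SPDK's spdk-gpt.py ("No valid GPT data, bailing") and then via blkid. A condensed sketch of that guard, keyed off blkid alone rather than both checks, and destructive by design, so suitable only for throwaway CI disks; the acl test output resumes right after it:

# Hedged sketch: scrub only namespaces with no partition-table signature.
shopt -s extglob                   # the log's /dev/nvme*n!(*p*) glob needs extglob
for dev in /dev/nvme*n!(*p*); do   # whole namespaces, skipping partitions
  pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null)
  if [[ -z $pt ]]; then            # mirrors scripts/common.sh@391-@392: no PTTYPE found
    dd if=/dev/zero of="$dev" bs=1M count=1   # autotest.sh@114: zero the first MiB
  fi
done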
00:02:17.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:17.549 00:03:36 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:17.549 00:03:36 setup.sh.acl -- common/autotest_common.sh@1663 -- # zoned_devs=() 00:02:17.549 00:03:36 setup.sh.acl -- common/autotest_common.sh@1663 -- # local -gA zoned_devs 00:02:17.549 00:03:36 setup.sh.acl -- common/autotest_common.sh@1664 -- # local nvme bdf 00:02:17.549 00:03:36 setup.sh.acl -- common/autotest_common.sh@1666 -- # for nvme in /sys/block/nvme* 00:02:17.549 00:03:36 setup.sh.acl -- common/autotest_common.sh@1667 -- # is_block_zoned nvme0n1 00:02:17.549 00:03:36 setup.sh.acl -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:02:17.549 00:03:36 setup.sh.acl -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:17.549 00:03:36 setup.sh.acl -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:02:17.549 00:03:36 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:17.549 00:03:36 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:17.549 00:03:36 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:17.549 00:03:36 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:17.549 00:03:36 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:17.549 00:03:36 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:17.549 00:03:36 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:20.911 00:03:39 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:20.911 00:03:39 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:20.911 00:03:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:20.911 00:03:39 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:20.911 00:03:39 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:20.911 00:03:39 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:22.814 Hugepages 00:02:22.814 node hugesize free / total 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.814 00:02:22.814 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.814 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:23.075 00:03:41 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:23.075 00:03:41 setup.sh.acl -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:02:23.075 00:03:41 setup.sh.acl -- common/autotest_common.sh@1099 -- # xtrace_disable 00:02:23.075 00:03:41 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:23.075 ************************************ 00:02:23.075 START TEST denied 00:02:23.075 ************************************ 00:02:23.075 00:03:41 setup.sh.acl.denied -- common/autotest_common.sh@1117 -- # denied 00:02:23.075 00:03:41 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:02:23.075 00:03:41 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:23.075 00:03:41 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:23.075 00:03:41 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:23.075 00:03:41 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:26.366 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:02:26.366 00:03:44 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:02:26.366 00:03:44 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:26.366 00:03:44 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:26.366 00:03:44 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:02:26.366 00:03:44 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:02:26.366 00:03:44 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:02:26.366 00:03:44 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:02:26.366 00:03:44 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:02:26.366 00:03:44 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:26.366 00:03:44 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:30.558
00:02:30.558 real 0m6.739s
00:02:30.558 user 0m2.143s
00:02:30.558 sys 0m3.906s
00:02:30.558 00:03:48 setup.sh.acl.denied -- common/autotest_common.sh@1118 -- # xtrace_disable
00:02:30.558 00:03:48 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:02:30.558 ************************************
00:02:30.558 END TEST denied
00:02:30.558 ************************************
00:02:30.558 00:03:48 setup.sh.acl -- common/autotest_common.sh@1136 -- # return 0
00:02:30.558 00:03:48 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:02:30.558 00:03:48 setup.sh.acl -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:02:30.558 00:03:48 setup.sh.acl -- common/autotest_common.sh@1099 -- # xtrace_disable
00:02:30.558 00:03:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:02:30.558 ************************************
00:02:30.558 START TEST allowed
00:02:30.558 ************************************
00:02:30.558 00:03:48 setup.sh.acl.allowed -- common/autotest_common.sh@1117 -- # allowed
00:02:30.558 00:03:48 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0
00:02:30.558 00:03:48 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:02:30.558 00:03:48 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*'
00:02:30.558 00:03:48 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:02:30.558 00:03:48 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:02:33.851 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:02:33.851 00:03:52 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:02:33.851 00:03:52 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:02:33.851 00:03:52 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:02:33.851 00:03:52 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:33.851 00:03:52 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:36.385
00:02:36.385 real 0m6.261s
00:02:36.385 user 0m1.889s
00:02:36.385 sys 0m3.444s
00:02:36.385 00:03:54 setup.sh.acl.allowed -- common/autotest_common.sh@1118 -- # xtrace_disable
00:02:36.385 00:03:54 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:02:36.385 ************************************
00:02:36.385 END TEST allowed
00:02:36.385 ************************************
00:02:36.385 00:03:54 setup.sh.acl -- common/autotest_common.sh@1136 -- # return 0
00:02:36.385
00:02:36.385 real 0m18.738s
00:02:36.385 user 0m6.151s
00:02:36.385 sys 0m11.115s
00:02:36.385 00:03:54 setup.sh.acl -- common/autotest_common.sh@1118 -- # xtrace_disable
00:02:36.385 00:03:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:02:36.385 ************************************
00:02:36.385 END TEST acl
00:02:36.385 ************************************
00:02:36.385 00:03:54 setup.sh -- common/autotest_common.sh@1136 -- # return 0
00:02:36.385 00:03:54 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:02:36.385 00:03:54 setup.sh -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:02:36.385 00:03:54 setup.sh -- common/autotest_common.sh@1099 -- # xtrace_disable
00:02:36.385 00:03:54 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:36.385 ************************************
00:02:36.385 START TEST hugepages
00:02:36.385 ************************************
00:02:36.385 00:03:54 setup.sh.hugepages -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:02:36.385 * Looking for test storage...
00:02:36.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:36.385 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:02:36.385 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:02:36.385 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:02:36.385 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:02:36.385 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:02:36.385 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:02:36.385 00:03:55 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:02:36.385 00:03:55 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:02:36.385 00:03:55 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:02:36.385 00:03:55 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:02:36.385 00:03:55 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.385 00:03:55 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:36.385 00:03:55 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:36.385 00:03:55 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.385 00:03:55 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.385 00:03:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:02:36.385 00:03:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:02:36.385 00:03:55 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173324452 kB' 'MemAvailable: 176197508 kB' 'Buffers: 3896 kB' 'Cached: 10163696 kB' 'SwapCached: 0 kB' 'Active: 7182060 kB' 'Inactive: 3507524 kB' 'Active(anon): 6790052 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525424 kB' 'Mapped: 202460 kB' 'Shmem: 6268060 kB' 'KReclaimable: 235952 kB' 'Slab: 827796 kB' 'SReclaimable: 235952 kB' 'SUnreclaim: 591844 kB' 'KernelStack: 20816 kB' 'PageTables: 9432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982028 kB' 'Committed_AS: 8324296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315676 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0'
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB'
[xtrace condensed: setup/common.sh@31-32 walks every /proc/meminfo key from MemTotal through HugePages_Surp, hitting "continue" on each non-matching key, until the requested key matches below]
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
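The lookup traced above amounts to a small /proc/meminfo parser. A minimal standalone sketch, assuming only what the xtrace shows (an IFS=': ' split, read -r var val _, and an echo once the requested key matches); get_meminfo_value is a hypothetical name for illustration, not the script's own:

    #!/usr/bin/env bash
    # Sketch of the per-key /proc/meminfo lookup performed by setup/common.sh@17-33.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every non-matching key
            echo "$val"                        # e.g. "2048" for Hugepagesize
            return 0
        done </proc/meminfo
        return 1
    }

    get_meminfo_value Hugepagesize   # prints 2048 on this machine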
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:02:36.387 00:03:55 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:02:36.387 00:03:55 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:02:36.387 00:03:55 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:02:36.387 00:03:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:36.387 ************************************ 00:02:36.387 START TEST single_node_setup 00:02:36.387 ************************************ 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1117 -- # single_node_setup 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:36.387 00:03:55 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # 
00:02:38.919 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:02:38.919 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:02:38.919 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:02:38.919 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:02:38.919 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:02:38.919 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:02:38.919 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:02:38.919 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:02:38.919 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:02:38.919 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:02:38.919 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:02:38.919 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:02:38.919 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:02:38.919 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:02:39.178 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:02:39.178 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:02:39.746 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # verify_nr_hugepages
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
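The "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines are setup.sh rebinding the I/OAT DMA engines and the NVMe SSD to vfio-pci so userspace (SPDK) can drive them. One way to spot-check the resulting binding from sysfs; a sketch, with the two PCI addresses taken from the log above:

    # Print the kernel driver currently bound to each PCI function.
    for dev in 0000:00:04.0 0000:5e:00.0; do
        drv=$(readlink -f "/sys/bus/pci/devices/$dev/driver" 2>/dev/null)
        echo "$dev -> ${drv##*/}"    # expect "vfio-pci" after setup.sh
    done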
00:02:40.012 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175465648 kB' 'MemAvailable: 178338704 kB' 'Buffers: 3896 kB' 'Cached: 10163792 kB' 'SwapCached: 0 kB' 'Active: 7198592 kB' 'Inactive: 3507524 kB' 'Active(anon): 6806584 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541288 kB' 'Mapped: 202364 kB' 'Shmem: 6268156 kB' 'KReclaimable: 235952 kB' 'Slab: 826684 kB' 'SReclaimable: 235952 kB' 'SUnreclaim: 590732 kB' 'KernelStack: 20544 kB' 'PageTables: 9268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8338876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB'
[xtrace condensed: setup/common.sh@31-32 walks the keys from MemTotal through HardwareCorrupted, continuing on each, until AnonHugePages matches below]
00:02:40.013 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:40.013 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:02:40.013 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:02:40.013 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0
00:02:40.013 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:02:40.013 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:40.013 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:02:40.013 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:02:40.013 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:40.013 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.013 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:40.013 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:40.013 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.013 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.013 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:02:40.013 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175466560 kB' 'MemAvailable: 178339616 kB' 'Buffers: 3896 kB' 'Cached: 10163804 kB' 'SwapCached: 0 kB' 'Active: 7198764 kB' 'Inactive: 3507524 kB' 'Active(anon): 6806756 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542008 kB' 'Mapped: 202224 kB' 'Shmem: 6268168 kB' 'KReclaimable: 235952 kB' 'Slab: 826664 kB' 'SReclaimable: 235952 kB' 'SUnreclaim: 590712 kB' 'KernelStack: 20576 kB' 'PageTables: 9384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8349228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB'
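verify_nr_hugepages is checking that transparent hugepages are not inflating the picture (AnonHugePages is 0 kB) and that the pool holds no surplus or reserved pages, so HugePages_Total should equal exactly the NRHUGE=1024 pages requested. A sketch of that check, reusing the hypothetical get_meminfo_value helper from the earlier sketch; the real function also sorts per-node counts (the sorted_t/sorted_s locals above), which is omitted here:

    anon=$(get_meminfo_value AnonHugePages)     # kB of THP in use; 0 in this run
    surp=$(get_meminfo_value HugePages_Surp)    # pages allocated beyond nr_hugepages
    resv=$(get_meminfo_value HugePages_Rsvd)    # pages reserved but not yet faulted in
    total=$(get_meminfo_value HugePages_Total)
    free=$(get_meminfo_value HugePages_Free)
    echo "total=$total free=$free anon=$anon surp=$surp resv=$resv"
    (( total == 1024 && surp == 0 && resv == 0 )) || echo 'unexpected hugepage state' >&2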
00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.014 00:03:58 setup.sh.hugepages.single_node_setup -- 
[... per-key scan elided: fields SReclaimable through HugePages_Free are each compared against HugePages_Surp at setup/common.sh@32, fail to match, and the loop continues via IFS=': ' / read -r var val _ (00:02:40.014-015) ...]
00:02:40.015 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:40.015 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue
00:02:40.015 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:02:40.015 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:02:40.015 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:40.015 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:02:40.015 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:02:40.015 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0
00:02:40.015 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:02:40.015 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:40.015 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:02:40.015 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:02:40.015 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:40.015 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.015 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:40.015 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:40.015 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.015 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.015 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:02:40.015 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:02:40.016 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175467208 kB' 'MemAvailable: 178340264 kB' 'Buffers: 3896 kB' 'Cached: 10163820 kB' 'SwapCached: 0 kB' 'Active: 7198540 kB' 'Inactive: 3507524 kB' 'Active(anon): 6806532 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541748 kB' 'Mapped: 202224 kB' 'Shmem: 6268184 kB' 'KReclaimable: 235952 kB' 'Slab: 826656 kB' 'SReclaimable: 235952 kB' 'SUnreclaim: 590704 kB' 'KernelStack: 20576 kB' 'PageTables: 9372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8339292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB'
[... per-key scan elided: fields MemTotal through HugePages_Free are each compared against HugePages_Rsvd at setup/common.sh@32, fail to match, and the loop continues (00:02:40.016-017) ...]
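What the loop traced above is doing, as a minimal standalone sketch reconstructed from the traced statements (the function name get_meminfo_sketch and the standalone framing are illustrative, not SPDK's actual file layout):

    #!/usr/bin/env bash
    # Sketch of the traced get_meminfo loop: read a meminfo file into an
    # array, strip any "Node N " prefix, split each line on ': ', and echo
    # the value of the requested field.
    shopt -s extglob   # needed for the +([0-9]) pattern in the prefix strip

    get_meminfo_sketch() {
        local get=$1 var val _ line
        local mem_f=/proc/meminfo mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # The trace shows this comparison once per meminfo field; every
            # non-matching key hits "continue" until the requested one is found.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    surp=$(get_meminfo_sketch HugePages_Surp)   # -> 0 in the run above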
00:02:40.017 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:40.017 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:02:40.017 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:02:40.017 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0
00:02:40.017 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:02:40.017 nr_hugepages=1024
00:02:40.017 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:02:40.018 resv_hugepages=0
00:02:40.018 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:02:40.018 surplus_hugepages=0
00:02:40.018 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:02:40.018 anon_hugepages=0
00:02:40.018 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:40.018 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:02:40.018 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:02:40.018 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:40.018 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:02:40.018 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:02:40.018 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:40.018 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.018 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:40.018 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:40.018 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.018 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.018 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:02:40.018 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:02:40.018 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175467208 kB' 'MemAvailable: 178340264 kB' 'Buffers: 3896 kB' 'Cached: 10163844 kB' 'SwapCached: 0 kB' 'Active: 7198552 kB' 'Inactive: 3507524 kB' 'Active(anon): 6806544 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541740 kB' 'Mapped: 202224 kB' 'Shmem: 6268208 kB' 'KReclaimable: 235952 kB' 'Slab: 826656 kB' 'SReclaimable: 235952 kB' 'SUnreclaim: 590704 kB' 'KernelStack: 20576 kB' 'PageTables: 9372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8339312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315484 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB'
[... per-key scan elided: fields MemTotal through Unaccepted are each compared against HugePages_Total at setup/common.sh@32, fail to match, and the loop continues (00:02:40.018-019) ...]
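The arithmetic guards traced above encode the hugepages accounting identity the test relies on; a sketch with this run's values (variable names as in the trace, values taken from the meminfo dumps above):

    # HugePages_Total must equal the requested pages plus surplus plus reserved.
    nr_hugepages=1024 surp=0 resv=0
    total=1024   # get_meminfo HugePages_Total returned 1024 above
    (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )) \
        && echo "single-node hugepages accounting consistent"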
00:02:40.019 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:40.019 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024
00:02:40.019 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:02:40.019 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:40.019 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes
00:02:40.019 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node
00:02:40.019 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:40.019 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:02:40.019 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:40.019 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:02:40.019 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2
00:02:40.020 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:02:40.020 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:02:40.020 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:02:40.020 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:02:40.020 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:40.020 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0
00:02:40.020 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:02:40.020 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:40.020 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.020 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:40.020 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:40.020 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.020 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.020 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:02:40.020 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:02:40.020 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85633756 kB' 'MemUsed: 12028928 kB' 'SwapCached: 0 kB' 'Active: 5053692 kB' 'Inactive: 3335888 kB' 'Active(anon): 4896152 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8208472 kB' 'Mapped: 75280 kB' 'AnonPages: 184344 kB' 'Shmem: 4715044 kB' 'KernelStack: 10920 kB' 'PageTables: 4740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125720 kB' 'Slab: 401332 kB' 'SReclaimable: 125720 kB' 'SUnreclaim: 275612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... per-key scan of the node0 meminfo elided: fields MemTotal through WritebackTmp are each compared against HugePages_Surp at setup/common.sh@32, fail to match, and the loop continues (00:02:40.020-021) ...]
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:02:40.021 node0=1024 expecting 1024 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:02:40.021 00:02:40.021 real 0m3.676s 00:02:40.021 user 0m1.188s 00:02:40.021 sys 0m1.763s 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1118 -- # xtrace_disable 00:02:40.021 00:03:58 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x 00:02:40.021 ************************************ 00:02:40.021 END TEST single_node_setup 00:02:40.021 ************************************ 00:02:40.021 00:03:58 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0 00:02:40.021 00:03:58 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc 00:02:40.021 00:03:58 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:02:40.022 00:03:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:02:40.022 
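single_node_setup passes with node0=1024 expecting 1024. The even_2G_alloc test that starts below asks get_test_nr_hugepages for 2097152 kB and splits the resulting 1024 pages evenly across both NUMA nodes, assigning the node array back to front (node1 first, then node0, 512 each). A minimal sketch of that arithmetic, assuming the 2048 kB Hugepagesize reported in the trace — names mirror hugepages.sh, but the body is a condensation, not the script itself:

    #!/usr/bin/env bash
    # Sketch: derive nr_hugepages from a 2G request and split it across NUMA nodes.
    size_kb=2097152                      # requested size in kB (2 GiB)
    hugepage_kb=2048                     # Hugepagesize from /proc/meminfo
    no_nodes=2                           # nodes seen under /sys/devices/system/node/

    nr_hugepages=$(( size_kb / hugepage_kb ))    # 2097152 / 2048 = 1024
    per_node=$(( nr_hugepages / no_nodes ))      # 1024 / 2 = 512

    declare -a nodes_test
    # hugepages.sh walks the nodes back to front: node1 first, then node0.
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$per_node
    done

    echo "node0=${nodes_test[0]} node1=${nodes_test[1]} expecting $per_node each"

verify_nr_hugepages then re-reads AnonHugePages, HugePages_Surp, and HugePages_Rsvd from /proc/meminfo to confirm the allocation, which is what the trace that follows walks through.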
00:03:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:40.280 ************************************ 00:02:40.280 START TEST even_2G_alloc 00:02:40.280 ************************************ 00:02:40.280 00:03:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1117 -- # even_2G_alloc 00:02:40.280 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:40.281 00:03:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:42.818 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:42.818 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:42.818 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:42.818 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:42.818 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:42.818 0000:00:04.3 (8086 2021): Already using 
the vfio-pci driver 00:02:42.818 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:42.818 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:42.818 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:42.818 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:42.818 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:42.818 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:42.818 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:42.818 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:42.818 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:42.818 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:42.818 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175463360 kB' 'MemAvailable: 178336384 kB' 'Buffers: 3896 kB' 'Cached: 10163940 kB' 'SwapCached: 0 kB' 'Active: 7196872 kB' 'Inactive: 3507524 kB' 'Active(anon): 6804864 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539228 kB' 'Mapped: 202256 kB' 'Shmem: 6268304 kB' 'KReclaimable: 235888 kB' 'Slab: 826632 kB' 'SReclaimable: 235888 kB' 
'SUnreclaim: 590744 kB' 'KernelStack: 20624 kB' 'PageTables: 9072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8339776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315660 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB' 00:02:42.818 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.819 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.819 00:04:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175464424 kB' 'MemAvailable: 178337448 kB' 'Buffers: 3896 kB' 'Cached: 10163944 kB' 'SwapCached: 0 kB' 'Active: 7196360 kB' 'Inactive: 3507524 kB' 'Active(anon): 6804352 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539272 kB' 'Mapped: 202236 kB' 'Shmem: 6268308 kB' 'KReclaimable: 235888 kB' 'Slab: 826588 kB' 'SReclaimable: 235888 kB' 'SUnreclaim: 590700 kB' 'KernelStack: 20576 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8339796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315628 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.820 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.821 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue
[... setup/common.sh@31-32 trace repeats for each remaining /proc/meminfo field (SecPageTables through HugePages_Rsvd), none matching HugePages_Surp ...]
00:02:42.822 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:42.822 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:42.822 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:42.822 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0
00:02:42.822 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:02:42.822 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:42.822 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:42.822 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:42.822 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:42.822 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:42.822 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:42.822 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:42.822 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:42.822 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:42.822 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175464152 kB' 'MemAvailable: 178337176 kB' 'Buffers: 3896 kB' 'Cached: 10163960 kB' 'SwapCached: 0 kB' 'Active: 7196356 kB' 'Inactive: 3507524 kB' 'Active(anon): 6804348 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539240 kB' 'Mapped: 202236 kB' 'Shmem: 6268324 kB' 'KReclaimable: 235888 kB' 'Slab: 826672 kB' 'SReclaimable: 235888 kB' 'SUnreclaim: 590784 kB' 'KernelStack: 20576 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8339816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315628 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB'
[... setup/common.sh@31-32 trace repeats for each /proc/meminfo field (MemTotal through HugePages_Free), none matching HugePages_Rsvd ...]
00:02:42.823 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:42.823 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:42.823 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:42.823 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0
00:02:42.823 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:02:42.823 nr_hugepages=1024
00:02:42.823 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:02:42.823 resv_hugepages=0
00:02:42.824 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:02:42.824 surplus_hugepages=0
00:02:42.824 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:02:42.824 anon_hugepages=0
00:02:42.824 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:42.824 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:02:42.824 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:02:42.824 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:42.824 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:42.824 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:42.824 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:42.824 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:42.824 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:42.824 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:42.824 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:42.824 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:42.824 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:42.824 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
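At this point hugepages.sh has established surp=0, resv=0 and nr_hugepages=1024 for the global pool, before the per-node checks further down. A worked restatement of the @106 consistency check with this run's numbers; a sketch only, with variable names taken from the trace:

nr_hugepages=1024   # requested pool size (even_2G_alloc: 1024 x 2048 kB = 2 GB)
surp=0              # HugePages_Surp from /proc/meminfo
resv=0              # HugePages_Rsvd from /proc/meminfo
# setup/hugepages.sh@106: the pool must be exactly the requested size,
# with no surplus or reserved pages outstanding.
(( 1024 == nr_hugepages + surp + resv )) && echo "global hugepage accounting consistent"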
00:02:42.824 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175464152 kB' 'MemAvailable: 178337176 kB' 'Buffers: 3896 kB' 'Cached: 10163984 kB' 'SwapCached: 0 kB' 'Active: 7196372 kB' 'Inactive: 3507524 kB' 'Active(anon): 6804364 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539228 kB' 'Mapped: 202236 kB' 'Shmem: 6268348 kB' 'KReclaimable: 235888 kB' 'Slab: 826672 kB' 'SReclaimable: 235888 kB' 'SUnreclaim: 590784 kB' 'KernelStack: 20560 kB' 'PageTables: 8908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8339840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315628 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB'
[... setup/common.sh@31-32 trace repeats for each /proc/meminfo field (MemTotal through Unaccepted), none matching HugePages_Total ...]
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:42.825 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
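The get_nodes walk above sized the expected per-node split: nodes_sys[0]=nodes_sys[1]=512 for the 1024-page pool on this two-node box. A minimal sketch of that walk follows; the trace shows only the resulting assignments, so reading the value from each node's nr_hugepages sysfs file is an assumption, not the verbatim hugepages.sh source:

shopt -s extglob

# Record how many 2048 kB hugepages each NUMA node currently holds.
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} strips the path down to the node index, e.g. 0 or 1.
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || exit 1
# even_2G_alloc then verifies each node against this expectation,
# comparing HugePages_Surp/HugePages_Total from the per-node meminfo.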
00:02:42.826 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86679308 kB' 'MemUsed: 10983376 kB' 'SwapCached: 0 kB' 'Active: 5051708 kB' 'Inactive: 3335888 kB' 'Active(anon): 4894168 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8208476 kB' 'Mapped: 75280 kB' 'AnonPages: 182264 kB' 'Shmem: 4715048 kB' 'KernelStack: 10904 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125720 kB' 'Slab: 401524 kB' 'SReclaimable: 125720 kB' 'SUnreclaim: 275804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@31-32 trace repeats for each node0 meminfo field (MemTotal through KReclaimable), none matching HugePages_Surp ...]
00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 --
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.085 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88785592 kB' 'MemUsed: 4932876 kB' 'SwapCached: 0 kB' 'Active: 2144576 kB' 'Inactive: 171636 kB' 'Active(anon): 1910108 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 171636 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1959444 kB' 'Mapped: 127460 kB' 'AnonPages: 356872 kB' 'Shmem: 1553340 kB' 'KernelStack: 9672 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110168 kB' 'Slab: 425148 kB' 'SReclaimable: 110168 kB' 
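The block above is the xtrace of the helper that pulls one field out of /proc/meminfo or a per-node meminfo file. A condensed sketch of that parsing loop, with names taken from the trace (an illustration, not the literal setup/common.sh source):

    #!/usr/bin/env bash
    shopt -s extglob                      # needed for the "Node N " prefix strip
    get_meminfo() {                       # usage: get_meminfo <Field> [<node>]
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # per-node files live in sysfs and prefix every line with "Node N "
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # e.g. var=HugePages_Surp val=0, or var=MemTotal val=93718468
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

So get_meminfo HugePages_Surp 1 prints node 1's surplus hugepage count (0 in this run), and get_meminfo AnonHugePages with no node argument reads the global /proc/meminfo.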
00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
[xtrace elided: get_meminfo locals and mapfile setup; node=1, so mem_f=/sys/devices/system/node/node1/meminfo]
00:02:43.086 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88785592 kB' 'MemUsed: 4932876 kB' 'SwapCached: 0 kB' 'Active: 2144576 kB' 'Inactive: 171636 kB' 'Active(anon): 1910108 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 171636 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1959444 kB' 'Mapped: 127460 kB' 'AnonPages: 356872 kB' 'Shmem: 1553340 kB' 'KernelStack: 9672 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110168 kB' 'Slab: 425148 kB' 'SReclaimable: 110168 kB' 'SUnreclaim: 314980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@32 per-field scan of node1 meminfo; MemTotal through HugePages_Free all fall through to continue]
00:02:43.087 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:43.087 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:43.087 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:43.087 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:02:43.087 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:02:43.087 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:02:43.087 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:02:43.087 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
00:02:43.087 node0=512 expecting 512
00:02:43.087 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:02:43.087 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:02:43.087 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:02:43.087 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512'
00:02:43.087 node1=512 expecting 512
00:02:43.087 00:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]]
00:02:43.087 
00:02:43.087 real 0m2.843s
00:02:43.087 user 0m1.173s
00:02:43.087 sys 0m1.721s
00:02:43.087 00:04:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1118 -- # xtrace_disable
00:02:43.087 00:04:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:43.087 ************************************
00:02:43.087 END TEST even_2G_alloc
00:02:43.087 ************************************
00:02:43.087 00:04:01 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0
00:02:43.087 00:04:01 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc
00:02:43.087 00:04:01 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:02:43.087 00:04:01 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable
00:02:43.087 00:04:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
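even_2G_alloc asked for 2 GiB of 2 MiB hugepages (1024 pages) spread evenly over the two NUMA nodes, and the lines above assert that each node really holds 512. A minimal re-creation of that check, assuming the per-node counts come from the sysfs meminfo files seen in the trace:

    nr_hugepages=1024                      # 2 GiB / 2 MiB per page
    for node in 0 1; do
        want=$(( nr_hugepages / 2 ))       # 512 per node
        got=$(awk '/HugePages_Total/ {print $NF}' \
              "/sys/devices/system/node/node$node/meminfo")
        echo "node$node=$got expecting $want"
        [[ $got == "$want" ]] || exit 1    # trace: [[ 512 == \5\1\2 ]]
    done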
00:02:43.087 ************************************
00:02:43.087 START TEST odd_alloc
00:02:43.087 ************************************
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1117 -- # odd_alloc
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:43.087 00:04:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
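The size passed to get_test_nr_hugepages is 2098176 kB, i.e. HUGEMEM=2049 MiB, which is deliberately not divisible by the 2048 kB hugepage size: it rounds up to an odd 1025 pages. The per-node split then hands 512 pages to node 1 and the 513-page remainder to node 0, consistent with the ': 513 / : 1' and ': 0 / : 0' side effects traced above. Illustrative arithmetic only, not the hugepages.sh source:

    size_kb=2098176                                       # HUGEMEM=2049 MiB in kB
    hugepage_kb=2048
    nr=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))   # ceil -> 1025
    nodes=2
    declare -a nodes_test
    while (( nodes > 0 )); do
        nodes_test[nodes - 1]=$(( nr / nodes ))   # node1 gets 512, then node0 gets 513
        : $(( nr -= nodes_test[nodes - 1] ))      # 513, then 0
        : $(( nodes-- ))                          # 1, then 0
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"  # node0=513 node1=512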
00:02:45.705 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:02:45.705 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:45.705 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:02:45.705 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:02:45.705 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:02:45.705 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:02:45.705 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:02:45.705 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:02:45.705 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:02:45.705 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:02:45.705 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:02:45.705 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:02:45.705 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:02:45.705 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:02:45.705 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:02:45.705 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:02:45.705 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:02:45.705 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages
00:02:45.705 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node
00:02:45.705 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:02:45.705 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:02:45.705 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp
00:02:45.705 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv
00:02:45.705 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon
00:02:45.705 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:45.705 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
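The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] line is xtrace's rendering of a glob match against /sys/kernel/mm/transparent_hugepage/enabled, where the kernel brackets the active THP mode (here [madvise]). Only when THP is not disabled does the script sample AnonHugePages. A sketch of that gate, using the standard sysfs path and the get_meminfo sketch from earlier:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then      # xtrace shows this as *\[\n\e\v\e\r\]*
        anon=$(get_meminfo AnonHugePages)   # THP may be serving hugepages; measure it
    fi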
[xtrace elided: get_meminfo locals; no node argument, so node/meminfo does not exist and mem_f stays /proc/meminfo]
00:02:45.705 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175449064 kB' 'MemAvailable: 178322088 kB' 'Buffers: 3896 kB' 'Cached: 10164100 kB' 'SwapCached: 0 kB' 'Active: 7194564 kB' 'Inactive: 3507524 kB' 'Active(anon): 6802556 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537396 kB' 'Mapped: 201216 kB' 'Shmem: 6268464 kB' 'KReclaimable: 235888 kB' 'Slab: 826052 kB' 'SReclaimable: 235888 kB' 'SUnreclaim: 590164 kB' 'KernelStack: 20512 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8328904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315532 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB'
[xtrace elided: setup/common.sh@32 per-field scan of /proc/meminfo; MemTotal through HardwareCorrupted all fail the AnonHugePages match]
00:02:45.707 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:45.707 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:45.707 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:45.707 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0
00:02:45.707 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
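With AnonHugePages at 0 kB, verify_nr_hugepages moves on to the surplus counter. The shape of this pass, per the @96 and @98 lines above (a sketch reusing the earlier get_meminfo sketch):

    anon=$(get_meminfo AnonHugePages)    # 0 kB here: THP handed out nothing
    surp=$(get_meminfo HugePages_Surp)   # pages allocated beyond nr_hugepages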
[xtrace elided: get_meminfo locals; reading /proc/meminfo again]
00:02:45.707 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175449264 kB' 'MemAvailable: 178322288 kB' 'Buffers: 3896 kB' 'Cached: 10164104 kB' 'SwapCached: 0 kB' 'Active: 7194748 kB' 'Inactive: 3507524 kB' 'Active(anon): 6802740 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537592 kB' 'Mapped: 201188 kB' 'Shmem: 6268468 kB' 'KReclaimable: 235888 kB' 'Slab: 826044 kB' 'SReclaimable: 235888 kB' 'SUnreclaim: 590156 kB' 'KernelStack: 20544 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8330048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315516 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB'
[xtrace elided: setup/common.sh@32 per-field scan of /proc/meminfo for HugePages_Surp, in progress through WritebackTmp]
00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.708 
00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.708 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175450988 kB' 'MemAvailable: 178324012 kB' 'Buffers: 3896 kB' 'Cached: 10164116 kB' 'SwapCached: 0 kB' 'Active: 7194860 kB' 'Inactive: 3507524 kB' 'Active(anon): 6802852 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537664 kB' 'Mapped: 201188 kB' 'Shmem: 6268480 kB' 'KReclaimable: 235888 kB' 'Slab: 826048 kB' 'SReclaimable: 235888 kB' 'SUnreclaim: 590160 kB' 'KernelStack: 20688 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8331568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315532 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB' 00:02:45.709 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.709 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.709 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 00:04:04 
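The cycle condensed above is setup/common.sh's get_meminfo helper: it snapshots a meminfo file into an array, then walks it one 'Field: value' pair at a time, skipping every field until the requested one and echoing its value. A minimal bash sketch of that loop, reconstructed from the xtrace output rather than copied from SPDK's setup/common.sh, so the exact structure and variable handling are assumptions:

    #!/usr/bin/env bash
    shopt -s extglob                      # the +([0-9]) pattern below needs extglob
    # get_meminfo FIELD [NODE] - reconstruction of the helper seen in this trace
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f=/proc/meminfo mem
        # with a node argument, read that node's own meminfo from /sys instead;
        # with none, /sys/devices/system/node/node/meminfo does not exist and
        # the /proc default stands (exactly the -e test logged at common.sh@23)
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # /sys lines carry a "Node N " prefix
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # the per-field skip in the trace
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo HugePages_Surp it prints 0 for this run's snapshot; the per-node form (get_meminfo HugePages_Surp 0, used further down) reads node0's copy.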
00:02:45.709 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:45.709 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32 xtrace elided: the per-field cycle repeats for every field up to HugePages_Free, none matching HugePages_Rsvd ...]
00:02:45.710 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:45.710 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:45.710 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:45.710 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0
00:02:45.710 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025
00:02:45.710 nr_hugepages=1025
00:02:45.710 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:02:45.710 resv_hugepages=0
00:02:45.710 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:02:45.710 surplus_hugepages=0
00:02:45.710 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:02:45.710 anon_hugepages=0
00:02:45.710 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:45.710 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages ))
00:02:45.710 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:02:45.710 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:45.710 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:45.710 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:45.710 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:45.710 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:45.710 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:45.710 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:45.710 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:45.711 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:45.711 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175451144 kB' 'MemAvailable: 178324168 kB' 'Buffers: 3896 kB' 'Cached: 10164116 kB' 'SwapCached: 0 kB' 'Active: 7194824 kB' 'Inactive: 3507524 kB' 'Active(anon): 6802816 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537184 kB' 'Mapped: 201188 kB' 'Shmem: 6268480 kB' 'KReclaimable: 235888 kB' 'Slab: 826048 kB' 'SReclaimable: 235888 kB' 'SUnreclaim: 590160 kB' 'KernelStack: 20720 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8330092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315548 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB'
00:02:45.711 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:45.711 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:45.711 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:45.711 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32 xtrace elided: the per-field cycle repeats until the HugePages_Total line is reached ...]
00:02:45.712 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:45.712 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
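At this point the test has its three numbers: surp=0, resv=0 and HugePages_Total=1025. The arithmetic checks traced at setup/hugepages.sh@106-109 assert that the kernel's hugepage accounting matches the odd count the test configured. A short sketch of that invariant, reusing the get_meminfo reconstruction above (variable names follow the trace; the standalone assertion form is an assumption):

    # the bookkeeping must balance: total pages == requested + surplus + reserved
    nr_hugepages=1025                     # the odd allocation under test
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 1025 in this run
    (( total == nr_hugepages + surp + resv ))  # 1025 == 1025 + 0 + 0
    (( total == nr_hugepages ))                # holds because surp == resv == 0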
setup/common.sh@33 -- # return 0 00:02:45.712 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:45.712 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:02:45.712 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node 00:02:45.712 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:45.712 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513 00:02:45.712 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:45.712 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:02:45.712 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:02:45.712 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:02:45.712 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:02:45.712 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:02:45.712 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:02:45.712 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:45.712 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86669000 kB' 'MemUsed: 10993684 kB' 'SwapCached: 0 kB' 'Active: 5049236 kB' 'Inactive: 3335888 kB' 'Active(anon): 4891696 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8208476 kB' 'Mapped: 74972 kB' 'AnonPages: 179724 kB' 'Shmem: 4715048 kB' 'KernelStack: 10920 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125720 kB' 'Slab: 400968 kB' 'SReclaimable: 125720 kB' 'SUnreclaim: 275248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- 
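The field-by-field `[[ ... == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]` / `continue` lines above are one pass of the get_meminfo helper from setup/common.sh: it reads /proc/meminfo (or the per-node file when a node argument is given), strips the "Node N " prefix the per-node files add, and scans key/value pairs until it hits the requested key. A condensed sketch of the pattern the trace shows — not the canonical source; the exact loop shape around `read -r var val _` is an assumption:

    # Condensed sketch of the get_meminfo pattern visible in this xtrace.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node query: switch to the node's meminfo file if it exists.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # per-node lines start with "Node N "
        local IFS=': '
        for line in "${mem[@]}"; do
            read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue
            echo "$val" # e.g. 1025 for HugePages_Total
            return 0
        done
        return 1
    }

In the pass above it echoed 1025 for HugePages_Total (513 on node0 plus 512 on node1), which hugepages.sh@109 then checks as `(( 1025 == nr_hugepages + surp + resv ))` before querying HugePages_Surp per node.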
setup/common.sh@32 -- # continue 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.974 00:04:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.974 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88780820 kB' 'MemUsed: 4937648 kB' 'SwapCached: 0 kB' 'Active: 2146116 kB' 'Inactive: 171636 kB' 'Active(anon): 1911648 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 171636 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1959600 kB' 'Mapped: 126216 kB' 'AnonPages: 358368 kB' 'Shmem: 1553496 kB' 'KernelStack: 9720 kB' 'PageTables: 4736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110168 kB' 'Slab: 425080 kB' 'SReclaimable: 110168 kB' 'SUnreclaim: 314912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.975 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.976 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.977 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.977 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.977 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:45.977 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.977 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.977 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.977 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:45.977 00:04:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:45.977 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:02:45.977 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:02:45.977 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:02:45.977 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:02:45.977 00:04:04 
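The @125/@126 assignments just traced are a small sorting trick: indexing a bash array by each node's page count means `${!sorted_t[*]}` later enumerates the counts in ascending order, so the multiset comparison at @129 reduces to one string test. A minimal reconstruction with this run's values (513 and 512):

    # Index-by-value sort, as used at hugepages.sh@125-@129.
    declare -a nodes_test=([0]=513 [1]=512) # expected pages per node
    declare -a nodes_sys=([0]=513 [1]=512)  # observed via get_nodes
    declare -a sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
    done
    # Both sides expand to "512 513", so the odd split verifies.
    [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo "odd_alloc: counts match"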
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:02:45.977 node0=513 expecting 513 00:02:45.977 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:02:45.977 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:02:45.977 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:02:45.977 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:02:45.977 node1=512 expecting 512 00:02:45.977 00:04:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:45.977 00:02:45.977 real 0m2.854s 00:02:45.977 user 0m1.211s 00:02:45.977 sys 0m1.711s 00:02:45.977 00:04:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:02:45.977 00:04:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:45.977 ************************************ 00:02:45.977 END TEST odd_alloc 00:02:45.977 ************************************ 00:02:45.977 00:04:04 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0 00:02:45.977 00:04:04 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc 00:02:45.977 00:04:04 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:02:45.977 00:04:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:02:45.977 00:04:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:45.977 ************************************ 00:02:45.977 START TEST custom_alloc 00:02:45.977 ************************************ 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1117 -- # custom_alloc 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:02:45.977 00:04:04 
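For custom_alloc, get_test_nr_hugepages turns a size into a page count: 1048576 kB (1 GiB), traced just above, becomes nr_hugepages=512, and the second request of 2097152 kB (2 GiB) below becomes 1024 — i.e. division by the 2048 kB huge page size this host reports. A worked check; the kB unit for the argument is inferred from these numbers:

    # Size -> page count, matching the @48/@56 lines in the trace.
    default_hugepages=2048          # kB, per "Hugepagesize: 2048 kB"
    for size in 1048576 2097152; do # kB: 1 GiB, then 2 GiB
        echo "size=${size} kB -> nr_hugepages=$((size / default_hugepages))"
    done
    # size=1048576 kB -> nr_hugepages=512
    # size=2097152 kB -> nr_hugepages=1024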
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- 
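The @80-@83 block above is the default even split of _nr_hugepages=512 over _no_nodes=2, filling the array from the last node down; the bare `: 256` / `: 1` (then `: 0` / `: 0`) lines are consistent with no-op `:` arithmetic tracking the remainder and the node counter, though the exact statements are inferred. custom_alloc then discards that split: nodes_hp[0]=512 pins the 1 GiB worth of pages to node 0, and after the second request nodes_hp[1]=1024 sends the rest to node 1.

    # Assumed shape of the even-split loop at hugepages.sh@80-@83.
    _nr_hugepages=512 _no_nodes=2
    declare -a nodes_test
    while ((_no_nodes > 0)); do
        nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))
        : $((_nr_hugepages -= nodes_test[_no_nodes - 1])) # traces as ": 256", ": 0"
        : $((_no_nodes -= 1))                             # traces as ": 1", ": 0"
    done
    echo "${nodes_test[@]}" # 256 256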
setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:45.977 00:04:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:48.516 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:48.516 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:48.516 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:48.516 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:48.778 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:48.778 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:48.778 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:48.779 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:48.779 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:48.779 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:48.779 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:48.779 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:48.779 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:48.779 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:48.779 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:48.779 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 
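Net effect of the @171-@173 loop: the per-node plan is flattened into a single HUGENODE environment variable and handed to setup.sh, which is why the "Already using the vfio-pci driver" probes around this point only confirm existing bindings. Reproducing the call outside the harness would look like this, using the same HUGENODE value and script path the trace shows:

    # 512 huge pages on node 0, 1024 on node 1, then (re)run SPDK setup.
    HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh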
00:02:48.779 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174405020 kB' 'MemAvailable: 177278044 kB' 'Buffers: 3896 kB' 'Cached: 10164240 kB' 'SwapCached: 0 kB' 'Active: 7196084 kB' 'Inactive: 3507524 kB' 'Active(anon): 6804076 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538596 kB' 'Mapped: 201324 kB' 'Shmem: 6268604 kB' 'KReclaimable: 235888 kB' 'Slab: 825908 kB' 'SReclaimable: 235888 kB' 'SUnreclaim: 590020 kB' 'KernelStack: 20592 kB' 'PageTables: 8680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8328952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315660 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 
'DirectMap1G: 182452224 kB' 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- 
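verify_nr_hugepages opens with the transparent-hugepage gate at @95: `always [madvise] never` does not match `*[never]*`, so THP is not hard-disabled on this host and AnonHugePages gets sampled; the scan in progress here finds `AnonHugePages: 0 kB`, hence anon=0. The /proc/meminfo dump above also confirms the custom allocation landed: HugePages_Total: 1536 (512 + 1024) and Hugetlb: 3145728 kB (1536 * 2048 kB). A self-contained sketch of the gate, with awk standing in for the get_meminfo scan:

    # THP gate, as at hugepages.sh@95: skip anon sampling only if "[never]".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled) # "always [madvise] never" here
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo) # 0 on this host
    fi
    echo "anon=${anon}"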
setup/common.sh@31 -- # IFS=': '
00:02:48.779 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@32 key checks and @31 re-reads over the remaining /proc/meminfo fields trimmed ...]
00:02:48.780 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:48.780 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:48.780 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:48.780 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0
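
For readability: the get_meminfo helper whose xtrace fills this section can be sketched from the trace itself, roughly as below. This is a reconstruction for orientation, not the verbatim setup/common.sh source; the @17-@33 references are the trace markers above.

#!/usr/bin/env bash
shopt -s extglob   # the +([0-9]) strip pattern below needs extglob

# Print one field of /proc/meminfo, or of a NUMA node's meminfo when a
# node number is given (reconstructed from the @17-@33 trace markers).
get_meminfo() {
    local get=$1
    local node=${2:-}
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # With no node argument (as in this run) the path below becomes
    # /sys/devices/system/node/node/meminfo, which never exists, so the
    # helper falls back to the system-wide /proc/meminfo.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Per-node meminfo prefixes every line with "Node <N> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan key by key; each miss is one "@32 [[ ... ]]" plus "@32 continue"
    # pair in the trace, which is why a single lookup emits dozens of
    # near-identical lines.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo AnonHugePages   # prints 0 on this box, hence anon=0 above
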
00:02:48.780 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:02:48.780 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:48.780 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:48.780 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:48.780 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:48.780 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:48.780 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:48.780 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:48.780 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:48.780 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:48.780 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:48.780 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:48.780 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174406336 kB' 'MemAvailable: 177279360 kB' 'Buffers: 3896 kB' 'Cached: 10164244 kB' 'SwapCached: 0 kB' 'Active: 7195212 kB' 'Inactive: 3507524 kB' 'Active(anon): 6803204 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537888 kB' 'Mapped: 201312 kB' 'Shmem: 6268608 kB' 'KReclaimable: 235888 kB' 'Slab: 825872 kB' 'SReclaimable: 235888 kB' 'SUnreclaim: 589984 kB' 'KernelStack: 20576 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8329108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315532 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB'
[... setup/common.sh@32 key checks and @31 re-reads trimmed until the requested field is reached ...]
00:02:48.782 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:48.782 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:48.782 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:48.782 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0
00:02:48.782 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:02:48.782 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:48.782 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:48.782 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:48.782 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:48.782 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:48.782 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:48.782 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:48.782 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:48.782 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:48.782 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:48.782 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:48.782 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174406588 kB' 'MemAvailable: 177279612 kB' 'Buffers: 3896 kB' 'Cached: 10164260 kB' 'SwapCached: 0 kB' 'Active: 7195076 kB' 'Inactive: 3507524 kB' 'Active(anon): 6803068 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537740 kB' 'Mapped: 201200 kB' 'Shmem: 6268624 kB' 'KReclaimable: 235888 kB' 'Slab: 825872 kB' 'SReclaimable: 235888 kB' 'SUnreclaim: 589984 kB' 'KernelStack: 20560 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8329136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315532 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB'
[... setup/common.sh@32 key checks and @31 re-reads trimmed until the requested field is reached ...]
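
The `local node=` and `[[ -e /sys/devices/system/node/node/meminfo ]]` lines above show the per-node branch being skipped, so every lookup here reads the system-wide file. The same counters can be queried directly; the /proc and /sys paths below are standard Linux interfaces, though node0's presence depends on the host:

# System-wide hugepage counters (the fields get_meminfo is scanning for):
grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize):' /proc/meminfo

# Per-NUMA-node view; note the "Node <N> " prefix that get_meminfo strips:
grep '^Node 0 HugePages_Total:' /sys/devices/system/node/node0/meminfo
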
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536
00:02:49.047 nr_hugepages=1536
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:02:49.047 resv_hugepages=0
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:02:49.047 surplus_hugepages=0
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:02:49.047 anon_hugepages=0
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages ))
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:49.047 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174406336 kB' 'MemAvailable: 177279360 kB' 'Buffers: 3896 kB' 'Cached: 10164312 kB' 'SwapCached: 0 kB' 'Active: 7194864 kB' 'Inactive: 3507524 kB' 'Active(anon): 6802856 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537460 kB' 'Mapped: 201200 kB' 'Shmem: 6268676 kB' 'KReclaimable: 235888 kB' 'Slab: 825872 kB' 'SReclaimable: 235888 kB' 'SUnreclaim: 589984 kB' 'KernelStack: 20544 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8329524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315532 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB'
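
The hugepages.sh@106 and @108 checks above assert that the 1536 configured pages are fully accounted for by the counters just collected. With the snapshot values this is plain arithmetic; the lines below are a standalone restatement, not the hugepages.sh source:

# Values parsed out of the meminfo snapshots above:
nr_hugepages=1536 surp=0 resv=0 anon=0
(( 1536 == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"
# Cross-check: Hugetlb should equal HugePages_Total * Hugepagesize,
# and HugePages_Free == HugePages_Total means no pages are in use yet.
echo $(( 1536 * 2048 ))   # 3145728 kB, matching 'Hugetlb: 3145728 kB'
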
[... setup/common.sh@32 key checks over the /proc/meminfo fields continue; this excerpt ends mid-scan of the HugePages_Total lookup ...]
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.048 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.048 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.048 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.048 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.048 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.048 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.048 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.048 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:02:49.049 00:04:07 
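The wall of skipped fields condensed above is the heart of setup/common.sh's get_meminfo: split each meminfo line on ': ', ignore every key except the requested one, then print its value. A minimal sketch of that loop for the system-wide case, reconstructed from the xtrace (illustrative, not the verbatim source; the real helper first snapshots the file with mapfile):

    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every non-matching field is skipped
            echo "$val" && return 0            # e.g. 1536 for HugePages_Total here
        done < /proc/meminfo
        return 1
    }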
00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:02:49.049 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:49.050 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:02:49.050 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:49.050 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:49.050 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.050 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:49.050 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:49.050 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.050 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.050 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:49.050 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:49.050 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86678680 kB' 'MemUsed: 10984004 kB' 'SwapCached: 0 kB' 'Active: 5050744 kB' 'Inactive: 3335888 kB' 'Active(anon): 4893204 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8208512 kB' 'Mapped: 74972 kB' 'AnonPages: 181292 kB' 'Shmem: 4715084 kB' 'KernelStack: 10872 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125720 kB' 'Slab: 400724 kB' 'SReclaimable: 125720 kB' 'SUnreclaim: 275004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace for node0's non-matching fields (MemTotal through HugePages_Free) elided: the same @31 read / @32 [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue sequence per field ...]
00:02:49.051 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:49.051 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:49.051 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:49.051 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
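The per-node lookups here reuse the same scan against /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node N " prefix; the trace strips it with the extglob pattern 'Node +([0-9]) ' before splitting. A sketch under those assumptions, simplified to a plain prefix strip (illustrative, not the verbatim helper):

    get_node_meminfo() {
        local get=$1 node=$2 line var val _
        while read -r line; do
            line=${line#"Node $node "}              # drop the per-node prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < "/sys/devices/system/node/node$node/meminfo"
        return 1
    }
    # get_node_meminfo HugePages_Surp 0   -> prints 0, as echoed in the trace above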
00:02:49.051 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:02:49.051 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:02:49.051 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:02:49.051 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:49.051 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:02:49.051 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:49.051 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:49.051 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.051 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:49.051 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:49.051 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.051 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.051 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:49.051 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:49.051 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 87727812 kB' 'MemUsed: 5990656 kB' 'SwapCached: 0 kB' 'Active: 2144132 kB' 'Inactive: 171636 kB' 'Active(anon): 1909664 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 171636 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1959700 kB' 'Mapped: 126228 kB' 'AnonPages: 356136 kB' 'Shmem: 1553596 kB' 'KernelStack: 9672 kB' 'PageTables: 4544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110168 kB' 'Slab: 425148 kB' 'SReclaimable: 110168 kB' 'SUnreclaim: 314980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace for node1's non-matching fields (MemTotal through HugePages_Free) elided: the same @31 read / @32 [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue sequence per field ...]
00:02:49.053 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:49.053 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:49.053 00:04:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:49.053 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
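Taken together, the two per-node snapshots account for the 1536-page total verified earlier: node0 reports 'HugePages_Total: 512' and node1 'HugePages_Total: 1024', so 512 + 1024 = 1536, and with 'HugePages_Surp: 0' on both nodes and the += resv steps leaving the counts unchanged, the @109 check reduces to 1536 == 1536 + 0 + 0. The per-node expectations echoed next (node0=512, node1=1024) follow from the same figures.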
00:02:49.053 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:02:49.053 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:02:49.053 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:02:49.053 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
00:02:49.053 node0=512 expecting 512
00:02:49.053 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:02:49.053 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:02:49.053 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:02:49.053 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024'
00:02:49.053 node1=1024 expecting 1024
00:02:49.053 00:04:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:02:49.053
00:02:49.053 real 0m3.037s
00:02:49.053 user 0m1.244s
00:02:49.053 sys 0m1.863s
00:02:49.053 00:04:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1118 -- # xtrace_disable
00:02:49.053 00:04:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:49.053 ************************************
00:02:49.053 END TEST custom_alloc
00:02:49.053 ************************************
00:02:49.053 00:04:07 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0
00:02:49.053 00:04:07 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc
00:02:49.053 00:04:07 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:02:49.053 00:04:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable
00:02:49.053 00:04:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:49.053 ************************************
00:02:49.053 START TEST no_shrink_alloc
00:02:49.053 ************************************
00:02:49.053 00:04:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1117 -- # no_shrink_alloc
00:02:49.053 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0
00:02:49.053 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:02:49.053 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 ))
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0')
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0')
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 ))
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}"
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0
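The get_test_nr_hugepages trace above boils down to: take a size, derive a page count, and pin the whole request to the node ids passed after it (here a single node, 0). A sketch of that reduction (reconstructed from the xtrace; the 2048 kB divisor is an assumption consistent with the Hugepagesize reported further down, and the real helper handles multi-node and default cases):

    get_test_nr_hugepages() {
        local size=$1; shift
        local node_ids=("$@")                      # here: one node id, "0"
        local node nr_hugepages=$((size / 2048))   # 2097152 / 2048 = 1024 pages
        nodes_test=()
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages         # all 1024 pages requested on node 0
        done
    }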
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:51.601 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:51.601 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
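As the @188 lines show, the request is handed to setup.sh through environment variables rather than flags, so the equivalent manual invocation (same workspace path as traced above) would be:

    HUGENODE=0 NRHUGE=1024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh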
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:49.054 00:04:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:51.601 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:51.601 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:02:51.601 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:02:51.601 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages
00:02:51.601 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node
00:02:51.601 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:02:51.601 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:02:51.601 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp
00:02:51.601 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv
00:02:51.601 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon
00:02:51.601 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:51.601 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:02:51.601 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:51.601 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:51.601 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:51.601 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:51.601 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:51.601 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:51.601 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:51.601 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:51.601 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:51.601 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:51.601 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
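The xtrace above shows the whole shape of the get_meminfo helper from setup/common.sh: pick the per-node sysfs meminfo when a node is given, strip the "Node N " prefix those files carry, then scan key/value pairs until the requested key matches. A self-contained sketch consistent with that trace (the real helper in the SPDK tree may differ in details):

  shopt -s extglob   # needed for the +([0-9]) pattern below

  get_meminfo() {
      local get=$1 node=${2:-}
      local var val line
      local mem_f=/proc/meminfo
      local -a mem
      # per-node snapshots live in sysfs; with no node argument the
      # global /proc/meminfo is used, as in the trace above
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix of per-node files
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          # every non-matching key is skipped; that scan is the long run
          # of 'continue' steps trimmed from the trace below
          [[ $var == "$get" ]] && echo "$val" && return 0
      done
      return 1
  }

  get_meminfo HugePages_Total   # -> 1024 on this box
  get_meminfo AnonHugePages     # -> 0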
00:02:51.602 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175496712 kB' 'MemAvailable: 178369720 kB' 'Buffers: 3896 kB' 'Cached: 10164396 kB' 'SwapCached: 0 kB' 'Active: 7198300 kB' 'Inactive: 3507524 kB' 'Active(anon): 6806292 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539848 kB' 'Mapped: 201292 kB' 'Shmem: 6268760 kB' 'KReclaimable: 235856 kB' 'Slab: 825624 kB' 'SReclaimable: 235856 kB' 'SUnreclaim: 589768 kB' 'KernelStack: 20640 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8329872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315596 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB'
[xtrace trimmed: setup/common.sh@32 tested every snapshot key from MemTotal through HardwareCorrupted against AnonHugePages and skipped each with 'continue']
00:02:51.603 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:51.603 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:51.603 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:51.603 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0
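Cross-checking the snapshot above: HugePages_Total times Hugepagesize is 1024 x 2048 kB = 2097152 kB, which is exactly the 'Hugetlb: 2097152 kB' accounting line and the size handed to get_test_nr_hugepages at the start of the test; HugePages_Free equals HugePages_Total, so nothing has faulted a huge page in yet.

  echo $(( 1024 * 2048 ))   # -> 2097152 (kB), i.e. the full 2 GiB pool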
00:02:51.603 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:02:51.603 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:51.603 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:51.603 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:51.603 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:51.603 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:51.603 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:51.603 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:51.603 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:51.603 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:51.603 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:51.603 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:51.603 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175497476 kB' 'MemAvailable: 178370484 kB' 'Buffers: 3896 kB' 'Cached: 10164400 kB' 'SwapCached: 0 kB' 'Active: 7197052 kB' 'Inactive: 3507524 kB' 'Active(anon): 6805044 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539536 kB' 'Mapped: 201212 kB' 'Shmem: 6268764 kB' 'KReclaimable: 235856 kB' 'Slab: 825596 kB' 'SReclaimable: 235856 kB' 'SUnreclaim: 589740 kB' 'KernelStack: 20544 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8330020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315548 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB'
[xtrace trimmed: the setup/common.sh@32 key scan repeats, skipping MemTotal through HugePages_Rsvd with 'continue' before HugePages_Surp matches]
00:02:51.605 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:51.605 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:51.605 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:51.605 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
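The remaining probes read the same counters a bare grep would show. HugePages_Surp counts surplus pages allocated beyond nr_hugepages via overcommit; HugePages_Rsvd counts pages a mapping has reserved but not yet faulted in. Both are 0 here, so the verifier is looking at a clean, fully free 1024-page pool:

  grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
  # HugePages_Total:    1024
  # HugePages_Free:     1024
  # HugePages_Rsvd:        0
  # HugePages_Surp:        0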
00:02:51.605 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:02:51.605 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:51.605 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:51.605 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:51.605 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:51.605 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:51.605 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:51.605 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:51.605 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:51.605 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:51.605 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:51.605 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:51.605 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175496668 kB' 'MemAvailable: 178369676 kB' 'Buffers: 3896 kB' 'Cached: 10164420 kB' 'SwapCached: 0 kB' 'Active: 7198456 kB' 'Inactive: 3507524 kB' 'Active(anon): 6806448 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540924 kB' 'Mapped: 201716 kB' 'Shmem: 6268784 kB' 'KReclaimable: 235856 kB' 'Slab: 825596 kB' 'SReclaimable: 235856 kB' 'SUnreclaim: 589740 kB' 'KernelStack: 20528 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8332192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315532 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB'
[xtrace trimmed: the setup/common.sh@32 key scan repeats once more, skipping MemTotal through HugePages_Free with 'continue' before HugePages_Rsvd matches]
00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc
-- setup/common.sh@33 -- # echo 0 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:02:51.607 nr_hugepages=1024 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:02:51.607 resv_hugepages=0 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:02:51.607 surplus_hugepages=0 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:02:51.607 anon_hugepages=0 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175489968 kB' 'MemAvailable: 178362976 kB' 'Buffers: 3896 kB' 'Cached: 10164440 kB' 'SwapCached: 0 kB' 'Active: 7202656 kB' 'Inactive: 3507524 kB' 'Active(anon): 6810648 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545160 kB' 'Mapped: 202080 kB' 'Shmem: 6268804 kB' 'KReclaimable: 235856 kB' 'Slab: 825596 kB' 'SReclaimable: 235856 kB' 'SUnreclaim: 589740 kB' 'KernelStack: 20544 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8336184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315536 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 
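The xtrace above is the get_meminfo() helper in setup/common.sh scanning a meminfo file key by key until the requested field matches, then echoing its value. A minimal standalone sketch of the same pattern, reconstructed from the trace (not the verbatim SPDK helper; the function body and defaults are an approximation):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern traced above: load a meminfo file,
    # strip the per-node "Node N " prefix if present, then scan key by key
    # with IFS=': ' and echo the value of the first matching key.
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem
        # A node argument switches to that node's sysfs meminfo copy,
        # mirroring the [[ -e /sys/devices/system/node/node$node/meminfo ]]
        # check visible in the trace.
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop "Node 0 " prefixes, if any
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Rsvd   # prints 0 on the machine traced above

Called as get_meminfo HugePages_Surp 0, the same function reads the node-local sysfs file instead, which is exactly what the trace does a few steps further down.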
00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175489968 kB' 'MemAvailable: 178362976 kB' 'Buffers: 3896 kB' 'Cached: 10164440 kB' 'SwapCached: 0 kB' 'Active: 7202656 kB' 'Inactive: 3507524 kB' 'Active(anon): 6810648 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545160 kB' 'Mapped: 202080 kB' 'Shmem: 6268804 kB' 'KReclaimable: 235856 kB' 'Slab: 825596 kB' 'SReclaimable: 235856 kB' 'SUnreclaim: 589740 kB' 'KernelStack: 20544 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8336184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315536 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB'
00:02:51.607 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop checks and skips every key from MemTotal through Unaccepted before HugePages_Total matches]
00:02:51.608 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:51.608 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:02:51.608 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:51.608 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
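The arithmetic checks at hugepages.sh@106, @108 and @109 above encode the accounting identity the test relies on: the kernel's HugePages_Total must equal the pool the test requested plus surplus plus reserved pages. A minimal sketch of the same check against a live /proc/meminfo (variable names are illustrative, not SPDK's):

    #!/usr/bin/env bash
    # Re-derive the identity asserted above:
    #   HugePages_Total == requested pool + HugePages_Surp + HugePages_Rsvd
    nr_hugepages=1024   # what the test asked for (illustrative)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "consistent: $total == $nr_hugepages + $surp + $resv"
    else
        echo "mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
        exit 1
    fi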
00:02:51.608 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:02:51.608 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node
00:02:51.608 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:51.608 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:02:51.608 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:51.608 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:02:51.608 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:02:51.608 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:02:51.608 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:02:51.608 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:02:51.609 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:02:51.609 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:51.609 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:02:51.609 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:51.609 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:51.609 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:51.609 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:51.609 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:51.609 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:51.609 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:51.609 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:51.609 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:51.609 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85653132 kB' 'MemUsed: 12009552 kB' 'SwapCached: 0 kB' 'Active: 5053052 kB' 'Inactive: 3335888 kB' 'Active(anon): 4895512 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8208660 kB' 'Mapped: 74972 kB' 'AnonPages: 183492 kB' 'Shmem: 4715232 kB' 'KernelStack: 10888 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125720 kB' 'Slab: 400692 kB' 'SReclaimable: 125720 kB' 'SUnreclaim: 274972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
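Here get_meminfo is re-entered with a node argument, so mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines all carry a "Node 0 " prefix (hence the mem=("${mem[@]#Node +([0-9]) }") strip in the trace). A short standalone sketch of reading just the hugepage counters from that per-node file (an illustration, not SPDK code):

    #!/usr/bin/env bash
    # Sketch of the per-node query path traced above. Every line in the
    # sysfs per-node meminfo is prefixed with "Node <id> ", e.g.
    #   Node 0 HugePages_Surp: 0
    node=0
    while read -r _ _ key val _; do
        # key arrives with a trailing colon, e.g. "HugePages_Surp:"
        [[ $key == HugePages_* ]] && echo "node$node ${key%:}=$val"
    done < "/sys/devices/system/node/node$node/meminfo"

On a two-node machine like the one traced here (no_nodes=2), the same read is repeated per node to fill the nodes_test array.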
00:02:51.609 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop checks and skips every node0 key from MemTotal through HugePages_Free before HugePages_Surp matches]
00:02:51.868 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:51.868 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:51.868 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:51.868 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:02:51.868 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:02:51.868 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:02:51.868 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:02:51.868 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:02:51.868 node0=1024 expecting 1024
00:02:51.868 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:02:51.868 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no
00:02:51.868 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512
00:02:51.868 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0
00:02:51.868 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output
00:02:51.868 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:51.868 00:04:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:54.405 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:02:54.405 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:54.405 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:02:54.405 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:02:54.405 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:02:54.405 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:02:54.405 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:02:54.405 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:02:54.405 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:02:54.405 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:02:54.405 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:02:54.405 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:02:54.405 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:02:54.405 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:02:54.405 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:02:54.405 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:02:54.405 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:02:54.405 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:02:54.405 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages
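The NRHUGE=512 HUGENODE=0 re-run of scripts/setup.sh above finds 1024 pages already allocated on node0 and, because CLEAR_HUGE=no, leaves the larger pool in place rather than shrinking it; that grow-only behavior is what this no_shrink_alloc test verifies. The underlying knob is the per-node nr_hugepages file in sysfs. A hedged sketch of the logic (illustrative only; the real implementation lives in scripts/setup.sh and is more involved):

    #!/usr/bin/env bash
    # Sketch of the mechanism behind NRHUGE/HUGENODE above: each NUMA node
    # exposes a 2 MiB hugepage pool counter in sysfs. Writing a value
    # smaller than the current allocation would shrink the pool, which the
    # test deliberately avoids, so this only ever grows it.
    node=0 want=512
    pool=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    have=$(<"$pool")
    if (( have >= want )); then
        # Mirrors the INFO line in the log above.
        echo "INFO: Requested $want hugepages but $have already allocated on node$node"
    else
        echo "$want" > "$pool"   # requires root
    fi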
setup/hugepages.sh@88 -- # local node 00:02:54.405 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:02:54.405 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:02:54.405 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:02:54.405 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:02:54.405 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:02:54.405 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:54.405 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:02:54.405 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:54.405 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:54.405 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:54.405 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.405 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.405 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.405 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.405 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.405 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175483596 kB' 'MemAvailable: 178356604 kB' 'Buffers: 3896 kB' 'Cached: 10164520 kB' 'SwapCached: 0 kB' 'Active: 7199464 kB' 'Inactive: 3507524 kB' 'Active(anon): 6807456 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541324 kB' 'Mapped: 201320 kB' 'Shmem: 6268884 kB' 'KReclaimable: 235856 kB' 'Slab: 825756 kB' 'SReclaimable: 235856 kB' 'SUnreclaim: 589900 kB' 'KernelStack: 20848 kB' 'PageTables: 9336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8333376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315724 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
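[editor's note] The long run of "[[ key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" records in this trace is get_meminfo() from setup/common.sh scanning /proc/meminfo one key at a time until the requested field matches. Below is a minimal standalone sketch of that lookup, assuming only the behavior visible in the trace (mem_f selection, the "Node N " prefix strip, and the IFS=': ' read loop); the helper body is illustrative, not a copy of SPDK's code.

#!/usr/bin/env bash
# Sketch of the meminfo lookup driving the [[ ... ]] / continue pairs above.
shopt -s extglob # needed for the +([0-9]) pattern in the prefix strip

get_meminfo() {
	local get=$1 node=${2:-}
	local var val _
	local mem_f=/proc/meminfo mem

	# With a node argument, read the per-node sysfs file instead; in the
	# trace node is empty, so "[[ -e /sys/devices/system/node/node/meminfo ]]"
	# fails and the global file is used.
	[[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo

	mapfile -t mem < "$mem_f"
	# Per-node lines carry a "Node N " prefix; strip it so keys line up.
	mem=("${mem[@]#Node +([0-9]) }")

	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue # skip keys we did not ask for
		echo "$val"                      # the "echo 0" / "return 0" exits in the trace
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo AnonHugePages   # prints 0 on the box captured above
get_meminfo HugePages_Total # prints 1024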
00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.406 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # 
[[ -n '' ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175483500 kB' 'MemAvailable: 178356508 kB' 'Buffers: 3896 kB' 'Cached: 10164524 kB' 'SwapCached: 0 kB' 'Active: 7198864 kB' 'Inactive: 3507524 kB' 'Active(anon): 6806856 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540708 kB' 'Mapped: 201324 kB' 'Shmem: 6268888 kB' 'KReclaimable: 235856 kB' 'Slab: 825808 kB' 'SReclaimable: 235856 kB' 'SUnreclaim: 589952 kB' 'KernelStack: 20768 kB' 'PageTables: 9436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8333392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315660 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.407 00:04:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.407 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.408 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175484256 kB' 'MemAvailable: 178357264 kB' 'Buffers: 3896 kB' 'Cached: 10164544 kB' 'SwapCached: 0 kB' 'Active: 7198152 kB' 'Inactive: 3507524 kB' 'Active(anon): 6806144 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540424 kB' 'Mapped: 201244 kB' 'Shmem: 6268908 kB' 'KReclaimable: 235856 kB' 'Slab: 825800 kB' 'SReclaimable: 235856 kB' 'SUnreclaim: 589944 kB' 'KernelStack: 20624 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8333416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315644 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.409 00:04:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.409 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.410 00:04:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.410 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[... xtrace trimmed: the get_meminfo read loop skips each remaining /proc/meminfo key (Bounce through HugePages_Free) until it reaches HugePages_Rsvd ...]
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:02:54.411 nr_hugepages=1024
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:02:54.411 resv_hugepages=0
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:02:54.411 surplus_hugepages=0
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:02:54.411 anon_hugepages=0
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
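The trace above is the get_meminfo helper from setup/common.sh scanning /proc/meminfo one key at a time with IFS=': '. A simplified standalone sketch of the same pattern (the real helper mapfiles the file first so the same loop can also serve per-node files; this is a reconstruction from the trace, not the script itself):

    #!/usr/bin/env bash
    # Sketch: return the value of one /proc/meminfo key, as get_meminfo does.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Skip every key until the requested one, then print its value.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo HugePages_Rsvd   # prints "0" on the run traced here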
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:54.411 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175483504 kB' 'MemAvailable: 178356512 kB' 'Buffers: 3896 kB' 'Cached: 10164564 kB' 'SwapCached: 0 kB' 'Active: 7197932 kB' 'Inactive: 3507524 kB' 'Active(anon): 6805924 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540208 kB' 'Mapped: 201244 kB' 'Shmem: 6268928 kB' 'KReclaimable: 235856 kB' 'Slab: 825800 kB' 'SReclaimable: 235856 kB' 'SUnreclaim: 589944 kB' 'KernelStack: 20704 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8333436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315612 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3062740 kB' 'DirectMap2M: 16539648 kB' 'DirectMap1G: 182452224 kB'
[... xtrace trimmed: the get_meminfo read loop skips each dumped key (MemTotal through HugePages_Free) until it reaches HugePages_Total ...]
00:02:54.412 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:54.412 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:02:54.412 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:54.412 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:54.412 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:02:54.412 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node
00:02:54.412 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:54.412 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:02:54.412 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:54.412 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:02:54.412 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:02:54.413 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:02:54.413 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:02:54.413 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:02:54.413 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:02:54.413 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:54.413 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:02:54.413 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:54.413 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:54.413 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:54.413 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:54.413 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:54.413 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:54.413 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:54.413 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:54.413 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
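For the per-node pass that follows, the same helper switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix; the mem=("${mem[@]#Node +([0-9]) }") expansion strips that prefix so the one parse loop serves both files. A sketch of that branch (extglob is required for the +([0-9]) pattern; reconstructed from the trace, not taken from the script):

    shopt -s extglob
    node=0 get=HugePages_Surp
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && echo "$val"
    done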
00:02:54.413 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85640716 kB' 'MemUsed: 12021968 kB' 'SwapCached: 0 kB' 'Active: 5051680 kB' 'Inactive: 3335888 kB' 'Active(anon): 4894140 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8208740 kB' 'Mapped: 74988 kB' 'AnonPages: 181968 kB' 'Shmem: 4715312 kB' 'KernelStack: 10856 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125720 kB' 'Slab: 400780 kB' 'SReclaimable: 125720 kB' 'SUnreclaim: 275060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace trimmed: the get_meminfo read loop skips each node0 key (MemTotal through HugePages_Free) until it reaches HugePages_Surp ...]
00:02:54.414 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:54.414 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:54.414 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:54.414 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:02:54.414 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:02:54.414 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:02:54.414 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:02:54.414 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:02:54.414 node0=1024 expecting 1024
00:02:54.414 00:04:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:02:54.414 
00:02:54.414 real 0m5.175s
00:02:54.414 user 0m2.027s
00:02:54.414 sys 0m3.192s
00:02:54.414 00:04:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1118 -- # xtrace_disable
00:02:54.414 00:04:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:54.414 ************************************
00:02:54.414 END TEST no_shrink_alloc
00:02:54.414 ************************************
00:02:54.414 00:04:12 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0
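The pass condition no_shrink_alloc just evaluated is the hugepage ledger: the HugePages_Total read back from the kernel must equal the requested nr_hugepages plus surplus plus reserved, and node0 must still hold all of them. Restated with this run's values (a paraphrase of the setup/hugepages.sh@106-129 checks, not the script itself):

    nr_hugepages=1024 surp=0 resv=0 total=1024 node0=1024
    (( total == nr_hugepages + surp + resv )) || echo "FAIL: global ledger drifted"
    (( node0 == nr_hugepages ))               || echo "FAIL: node0 lost pages"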
00:02:54.414 00:04:12 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp
00:02:54.414 00:04:12 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp
00:02:54.414 00:04:12 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:02:54.414 00:04:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:54.414 00:04:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:02:54.414 00:04:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:54.414 00:04:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:02:54.414 00:04:12 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:02:54.414 00:04:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:54.414 00:04:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:02:54.414 00:04:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:54.414 00:04:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:02:54.414 00:04:12 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes
00:02:54.414 00:04:12 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes
00:02:54.414 
00:02:54.414 real 0m18.031s
00:02:54.414 user 0m7.024s
00:02:54.414 sys 0m10.543s
00:02:54.414 00:04:12 setup.sh.hugepages -- common/autotest_common.sh@1118 -- # xtrace_disable
00:02:54.414 00:04:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:54.414 ************************************
00:02:54.414 END TEST hugepages
00:02:54.414 ************************************
00:02:54.414 00:04:13 setup.sh -- common/autotest_common.sh@1136 -- # return 0
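clear_hp returns every hugepage pool to zero by writing 0 into each per-node, per-size nr_hugepages file; two nodes with two page sizes each explain the four echo 0 records above. A sketch of that teardown (needs root; reconstructed from the trace):

    shopt -s extglob
    for node in /sys/devices/system/node/node+([0-9]); do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # free the pool for this node/page size
        done
    done
    export CLEAR_HUGE=yes   # exported for later setup.sh runs, as in the trace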
00:02:54.414 00:04:13 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:02:54.414 00:04:13 setup.sh -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:02:54.414 00:04:13 setup.sh -- common/autotest_common.sh@1099 -- # xtrace_disable
00:02:54.414 00:04:13 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:54.414 ************************************
00:02:54.414 START TEST driver
00:02:54.414 ************************************
00:02:54.414 00:04:13 setup.sh.driver -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:02:54.414 * Looking for test storage...
00:02:54.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:54.414 00:04:13 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:02:54.414 00:04:13 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:54.414 00:04:13 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:58.616 00:04:16 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:02:58.616 00:04:16 setup.sh.driver -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:02:58.616 00:04:16 setup.sh.driver -- common/autotest_common.sh@1099 -- # xtrace_disable
00:02:58.616 00:04:16 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:02:58.616 ************************************
00:02:58.616 START TEST guess_driver
00:02:58.616 ************************************
00:02:58.616 00:04:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@1117 -- # guess_driver
00:02:58.616 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:02:58.616 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:02:58.616 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:02:58.616 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:02:58.616 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups
00:02:58.616 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:02:58.616 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:02:58.616 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:02:58.616 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:02:58.616 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 ))
00:02:58.616 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:02:58.617 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:02:58.617 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:02:58.617 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:02:58.617 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:02:58.617 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:02:58.617 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:02:58.617 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:02:58.617 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:02:58.617 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:02:58.617 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:02:58.617 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:02:58.617 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:02:58.617 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:02:58.617 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
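pick_driver settles on vfio-pci here because the host exposes IOMMU groups (174 of them) and modprobe --show-depends resolves vfio_pci to real .ko modules without loading anything. A condensed sketch of that decision (the real pick_driver has further fallbacks; this shows only the path the trace took, and the fallback string matches the driver.sh@51 check below):

    pick_driver() {
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        # vfio-pci is viable when IOMMU groups exist and the module resolves.
        if (( ${#iommu_groups[@]} > 0 )) \
            && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo 'No valid driver found'
        fi
    }
    driver=$(pick_driver)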
00:02:58.617 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:02:58.617 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:02:58.617 Looking for driver=vfio-pci
00:02:58.617 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:58.617 00:04:16 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:02:58.617 00:04:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:02:58.617 00:04:16 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[... xtrace trimmed: setup/driver.sh@57-61 repeats for every PCI device line printed by "setup output config" (00:03:00.524 through 00:03:01.352); each iteration matches the "->" marker and confirms the bound driver is vfio-pci, as sketched below ...]
00:03:01.610 00:04:20 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:01.610 00:04:20 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:03:01.610 00:04:20 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:01.610 00:04:20 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:05.797 
00:03:05.797 real 0m7.200s
00:03:05.797 user 0m1.955s
00:03:05.797 sys 0m3.689s
00:03:05.797 00:04:24 setup.sh.driver.guess_driver -- common/autotest_common.sh@1118 -- # xtrace_disable
00:03:05.797 00:04:24 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:03:05.797 ************************************
00:03:05.797 END TEST guess_driver
00:03:05.797 ************************************
00:03:05.797 00:04:24 setup.sh.driver -- common/autotest_common.sh@1136 -- # return 0
00:03:05.797 
00:03:05.797 real 0m11.013s
00:03:05.797 user 0m2.974s
00:03:05.797 sys 0m5.702s
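The loop condensed in the trimmed span above is the verification half of guess_driver: it re-runs "setup output config", which prints one line per PCI device ending in "-> <bound driver>", and the test fails if any device is bound to something other than the driver just picked. A sketch of that loop (the output shape is inferred from the read pattern in the trace; the script path is the repo's scripts/setup.sh):

    driver=vfio-pci fail=0
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue            # skip non-device lines
        [[ $setup_driver == "$driver" ]] || fail=1   # any mismatch fails the test
    done < <(scripts/setup.sh config)
    (( fail == 0 )) && echo "every device bound to $driver"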
setup.sh.driver -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:05.797 00:04:24 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:05.797 ************************************ 00:03:05.797 END TEST driver 00:03:05.797 ************************************ 00:03:05.797 00:04:24 setup.sh -- common/autotest_common.sh@1136 -- # return 0 00:03:05.797 00:04:24 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:05.797 00:04:24 setup.sh -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:05.797 00:04:24 setup.sh -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:05.797 00:04:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:05.798 ************************************ 00:03:05.798 START TEST devices 00:03:05.798 ************************************ 00:03:05.798 00:04:24 setup.sh.devices -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:05.798 * Looking for test storage... 00:03:05.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:05.798 00:04:24 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:05.798 00:04:24 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:05.798 00:04:24 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:05.798 00:04:24 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.393 00:04:27 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:08.393 00:04:27 setup.sh.devices -- common/autotest_common.sh@1663 -- # zoned_devs=() 00:03:08.393 00:04:27 setup.sh.devices -- common/autotest_common.sh@1663 -- # local -gA zoned_devs 00:03:08.393 00:04:27 setup.sh.devices -- common/autotest_common.sh@1664 -- # local nvme bdf 00:03:08.393 00:04:27 setup.sh.devices -- common/autotest_common.sh@1666 -- # for nvme in /sys/block/nvme* 00:03:08.393 00:04:27 setup.sh.devices -- common/autotest_common.sh@1667 -- # is_block_zoned nvme0n1 00:03:08.393 00:04:27 setup.sh.devices -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:03:08.393 00:04:27 setup.sh.devices -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:08.393 00:04:27 setup.sh.devices -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:03:08.393 00:04:27 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:08.393 00:04:27 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:08.393 00:04:27 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:08.393 00:04:27 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:08.393 00:04:27 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:08.393 00:04:27 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:08.393 00:04:27 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:08.393 00:04:27 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:08.393 00:04:27 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:08.393 00:04:27 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:08.393 00:04:27 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:08.393 00:04:27 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:08.393 
00:04:27 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:08.393 No valid GPT data, bailing 00:03:08.393 00:04:27 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:08.393 00:04:27 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:08.393 00:04:27 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:08.393 00:04:27 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:08.393 00:04:27 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:08.393 00:04:27 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:08.393 00:04:27 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:08.393 00:04:27 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:08.393 00:04:27 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:08.393 00:04:27 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:08.393 00:04:27 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:08.393 00:04:27 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:08.393 00:04:27 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:08.393 00:04:27 setup.sh.devices -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:08.393 00:04:27 setup.sh.devices -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:08.393 00:04:27 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:08.652 ************************************ 00:03:08.652 START TEST nvme_mount 00:03:08.652 ************************************ 00:03:08.652 00:04:27 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1117 -- # nvme_mount 00:03:08.652 00:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:08.652 00:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:08.652 00:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:08.652 00:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:08.652 00:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:08.652 00:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:08.652 00:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:08.652 00:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:08.652 00:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:08.652 00:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:08.652 00:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:08.652 00:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:08.652 00:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:08.652 00:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:08.652 00:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:08.652 00:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:03:08.652 00:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:08.652 00:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:08.652 00:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:09.591 Creating new GPT entries in memory. 00:03:09.591 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:09.591 other utilities. 00:03:09.591 00:04:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:09.591 00:04:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:09.591 00:04:28 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:09.591 00:04:28 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:09.591 00:04:28 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:10.527 Creating new GPT entries in memory. 00:03:10.527 The operation has completed successfully. 00:03:10.527 00:04:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:10.527 00:04:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:10.527 00:04:29 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1308819 00:03:10.527 00:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:10.527 00:04:29 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:10.527 00:04:29 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:10.527 00:04:29 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:10.527 00:04:29 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:10.527 00:04:29 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:10.787 00:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:10.787 00:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:10.787 00:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:10.787 00:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:10.787 00:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:10.787 00:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:10.787 00:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:10.787 00:04:29 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:10.787 00:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:10.787 00:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.787 00:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:10.787 00:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:10.787 00:04:29 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.787 00:04:29 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:13.319 00:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.319 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:13.319 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:13.319 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:13.319 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:13.319 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:13.319 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:13.319 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:13.319 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:13.319 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:13.319 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:13.319 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:13.319 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:13.319 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:13.579 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:13.579 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:13.579 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:13.579 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:13.579 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:13.579 00:04:32 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:13.579 00:04:32 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:13.579 00:04:32 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:13.579 00:04:32 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:13.579 00:04:32 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:13.838 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:13.838 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:13.838 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:13.838 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:13.838 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:13.838 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:13.838 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:13.838 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:13.838 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:13.838 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:13.838 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:13.838 00:04:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:13.838 00:04:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.838 00:04:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.407 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.408 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.408 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.408 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.408 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:16.408 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.408 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:16.408 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:16.408 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:16.408 00:04:34 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:16.408 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:16.408 00:04:34 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:16.408 00:04:35 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:16.408 00:04:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:16.408 00:04:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:16.408 00:04:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:16.408 00:04:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:16.408 00:04:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:16.408 00:04:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:16.408 00:04:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:16.408 00:04:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.408 00:04:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:16.408 00:04:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:16.408 00:04:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.408 00:04:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
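
What this stretch of xtrace is showing: the devices.sh verify helper pins PCI_ALLOWED to the NVMe device under test (0000:5e:00.0), reruns setup.sh config, and walks the per-device report with `read -r pci _ _ status`, expecting the device under test to be reported as "Active devices: ... so not binding PCI dev" because it is still in use. A minimal sketch of that pattern, with illustrative variable names and a simplified status match (the real helper also compares the reported mount/test-file list):

    allowed=0000:5e:00.0
    found=0
    while read -r pci _ _ status; do
        # the device under test should be skipped ("so not binding PCI dev")
        if [[ $pci == "$allowed" && $status == *'so not binding PCI dev'* ]]; then
            found=1
        fi
    done < <(PCI_ALLOWED="$allowed" ./scripts/setup.sh config)
    (( found == 1 )) || echo "expected $allowed to be left on its kernel driver" >&2

The read/compare lines that continue below are the rest of that same scan: each I/OAT channel on nodes 0 and 1 is checked against the allowed BDF and passed over.
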
00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.943 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.944 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.944 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.944 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.944 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:18.944 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:18.944 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:18.944 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:18.944 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.944 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:18.944 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:18.944 00:04:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:18.944 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:18.944 00:03:18.944 real 0m10.447s 00:03:18.944 user 0m3.056s 00:03:18.944 sys 0m5.181s 00:03:18.944 00:04:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:18.944 00:04:37 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:03:18.944 ************************************ 00:03:18.944 END TEST nvme_mount 00:03:18.944 ************************************ 00:03:18.944 00:04:37 setup.sh.devices -- common/autotest_common.sh@1136 -- # return 0 00:03:18.944 00:04:37 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:18.944 00:04:37 setup.sh.devices -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:18.944 00:04:37 setup.sh.devices -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:18.944 00:04:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:18.944 ************************************ 00:03:18.944 START TEST dm_mount 00:03:18.944 ************************************ 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@1117 -- # dm_mount 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:18.944 00:04:37 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:20.318 Creating new GPT entries in memory. 00:03:20.318 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:20.318 other utilities. 00:03:20.318 00:04:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:20.318 00:04:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:20.318 00:04:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:20.318 00:04:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:20.318 00:04:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:21.255 Creating new GPT entries in memory. 00:03:21.255 The operation has completed successfully. 00:03:21.255 00:04:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:21.255 00:04:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:21.255 00:04:39 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:21.255 00:04:39 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:21.255 00:04:39 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:22.192 The operation has completed successfully. 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1312871 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.192 00:04:40 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:03:24.727 00:04:43 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:24.727 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:24.985 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.985 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:24.985 00:04:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:24.985 00:04:43 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.985 00:04:43 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:27.519 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:27.520 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:27.520 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:27.520 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:27.520 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:27.520 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:27.520 00:04:46 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:27.779 00:03:27.779 real 0m8.593s 00:03:27.779 user 0m2.030s 00:03:27.779 sys 0m3.564s 00:03:27.779 00:04:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:27.779 00:04:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:27.779 ************************************ 00:03:27.779 END TEST dm_mount 00:03:27.779 ************************************ 00:03:27.779 00:04:46 setup.sh.devices -- common/autotest_common.sh@1136 -- # return 
0
00:03:27.779 00:04:46 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:03:27.779 00:04:46 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:03:27.779 00:04:46 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:27.779 00:04:46 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:27.779 00:04:46 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:27.779 00:04:46 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:27.779 00:04:46 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:28.038 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:28.038 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:28.038 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:28.038 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:28.038 00:04:46 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:03:28.038 00:04:46 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:28.038 00:04:46 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:28.038 00:04:46 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:28.038 00:04:46 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:28.038 00:04:46 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:03:28.038 00:04:46 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:03:28.038
00:03:28.038 real 0m22.558s
00:03:28.038 user 0m6.291s
00:03:28.038 sys 0m10.925s
00:03:28.038 00:04:46 setup.sh.devices -- common/autotest_common.sh@1118 -- # xtrace_disable
00:03:28.038 00:04:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:28.038 ************************************
00:03:28.038 END TEST devices
00:03:28.038 ************************************
00:03:28.038 00:04:46 setup.sh -- common/autotest_common.sh@1136 -- # return 0
00:03:28.038
00:03:28.038 real 1m10.682s
00:03:28.038 user 0m22.575s
00:03:28.038 sys 0m38.518s
00:03:28.038 00:04:46 setup.sh -- common/autotest_common.sh@1118 -- # xtrace_disable
00:03:28.038 00:04:46 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:28.038 ************************************
00:03:28.038 END TEST setup.sh
00:03:28.038 ************************************
00:03:28.038 00:04:46 -- common/autotest_common.sh@1136 -- # return 0
00:03:28.038 00:04:46 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:30.573 Hugepages
00:03:30.573 node hugesize free / total
00:03:30.573 node0 1048576kB 0 / 0
00:03:30.573 node0 2048kB 1024 / 1024
00:03:30.573 node1 1048576kB 0 / 0
00:03:30.573 node1 2048kB 1024 / 1024
00:03:30.573
00:03:30.573 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:30.573 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:03:30.573 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:03:30.573 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:03:30.573 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:03:30.573 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:03:30.573 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:03:30.573 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:03:30.573 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:03:30.832 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:03:30.832 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:03:30.832 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:03:30.832 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:03:30.832 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:03:30.832 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:03:30.832 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:03:30.832 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:03:30.832 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:03:30.832 00:04:49 -- spdk/autotest.sh@130 -- # uname -s
00:03:30.832 00:04:49 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:03:30.832 00:04:49 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:03:30.832 00:04:49 -- common/autotest_common.sh@1525 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:33.394 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:33.394 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:33.394 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:33.394 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:33.394 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:33.394 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:33.394 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:33.394 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:33.394 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:33.394 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:33.394 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:33.394 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:33.394 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:33.394 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:33.394 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:33.394 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:33.979 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:34.238 00:04:52 -- common/autotest_common.sh@1526 -- # sleep 1
00:03:35.175 00:04:53 -- common/autotest_common.sh@1527 -- # bdfs=()
00:03:35.175 00:04:53 -- common/autotest_common.sh@1527 -- # local bdfs
00:03:35.175 00:04:53 -- common/autotest_common.sh@1528 -- # bdfs=($(get_nvme_bdfs))
00:03:35.175 00:04:53 -- common/autotest_common.sh@1528 -- # get_nvme_bdfs
00:03:35.175 00:04:53 -- common/autotest_common.sh@1507 -- # bdfs=()
00:03:35.175 00:04:53 -- common/autotest_common.sh@1507 -- # local bdfs
00:03:35.175 00:04:53 -- common/autotest_common.sh@1508 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:35.175 00:04:53 -- common/autotest_common.sh@1508 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:35.175 00:04:53 -- common/autotest_common.sh@1508 -- # jq -r '.config[].params.traddr'
00:03:35.175 00:04:54 -- common/autotest_common.sh@1509 -- # (( 1 == 0 ))
00:03:35.175 00:04:54 -- common/autotest_common.sh@1513 -- # printf '%s\n' 0000:5e:00.0
00:03:35.175 00:04:54 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:37.709 Waiting for block devices as requested
00:03:37.709 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:03:37.709 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:03:37.709 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:03:37.709 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:03:37.968 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:03:37.968 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:03:37.968 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:03:37.968 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:03:38.227 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
vfio-pci -> ioatdma 00:03:38.227 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:38.227 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:38.485 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:38.485 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:38.485 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:38.485 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:38.742 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:38.742 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:38.742 00:04:57 -- common/autotest_common.sh@1532 -- # for bdf in "${bdfs[@]}" 00:03:38.742 00:04:57 -- common/autotest_common.sh@1533 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:38.742 00:04:57 -- common/autotest_common.sh@1496 -- # readlink -f /sys/class/nvme/nvme0 00:03:38.742 00:04:57 -- common/autotest_common.sh@1496 -- # grep 0000:5e:00.0/nvme/nvme 00:03:38.742 00:04:57 -- common/autotest_common.sh@1496 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:38.742 00:04:57 -- common/autotest_common.sh@1497 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:38.742 00:04:57 -- common/autotest_common.sh@1501 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:38.742 00:04:57 -- common/autotest_common.sh@1501 -- # printf '%s\n' nvme0 00:03:38.742 00:04:57 -- common/autotest_common.sh@1533 -- # nvme_ctrlr=/dev/nvme0 00:03:38.742 00:04:57 -- common/autotest_common.sh@1534 -- # [[ -z /dev/nvme0 ]] 00:03:38.742 00:04:57 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:03:38.742 00:04:57 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:03:38.742 00:04:57 -- common/autotest_common.sh@1539 -- # grep oacs 00:03:38.742 00:04:57 -- common/autotest_common.sh@1539 -- # oacs=' 0xe' 00:03:38.742 00:04:57 -- common/autotest_common.sh@1540 -- # oacs_ns_manage=8 00:03:38.742 00:04:57 -- common/autotest_common.sh@1542 -- # [[ 8 -ne 0 ]] 00:03:38.742 00:04:57 -- common/autotest_common.sh@1548 -- # nvme id-ctrl /dev/nvme0 00:03:38.742 00:04:57 -- common/autotest_common.sh@1548 -- # grep unvmcap 00:03:38.742 00:04:57 -- common/autotest_common.sh@1548 -- # cut -d: -f2 00:03:38.742 00:04:57 -- common/autotest_common.sh@1548 -- # unvmcap=' 0' 00:03:38.742 00:04:57 -- common/autotest_common.sh@1549 -- # [[ 0 -eq 0 ]] 00:03:38.742 00:04:57 -- common/autotest_common.sh@1551 -- # continue 00:03:38.743 00:04:57 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:38.743 00:04:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:38.743 00:04:57 -- common/autotest_common.sh@10 -- # set +x 00:03:39.014 00:04:57 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:39.014 00:04:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:39.014 00:04:57 -- common/autotest_common.sh@10 -- # set +x 00:03:39.014 00:04:57 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:40.914 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:40.914 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:40.914 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:40.914 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:40.914 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:40.914 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:40.914 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:40.914 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:40.914 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:41.173 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
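The oacs/unvmcap probe traced above condenses to a few shell steps. A minimal sketch of the same check, assuming nvme-cli's "field : value" id-ctrl output; the actual helper lives in common/autotest_common.sh and may differ in detail:

    # Bit 3 of OACS (mask 0x8) is the NVMe Namespace Management bit,
    # so oacs=' 0xe' above yields oacs_ns_manage=8 (0xe & 0x8)
    oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
    oacs_ns_manage=$((oacs & 0x8))
    if [ "$oacs_ns_manage" -ne 0 ]; then
        # Namespace management is supported; unvmcap=' 0' above means no
        # unallocated capacity, so the revert is skipped ('continue')
        unvmcap=$(nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2)
        [ "$unvmcap" -eq 0 ] && echo "no unallocated capacity on /dev/nvme0"
    fi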
00:03:41.173 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:41.173 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:41.173 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:41.173 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:41.173 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:41.173 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:42.111 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:42.111 00:05:00 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:42.111 00:05:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:42.111 00:05:00 -- common/autotest_common.sh@10 -- # set +x 00:03:42.111 00:05:00 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:42.111 00:05:00 -- common/autotest_common.sh@1585 -- # mapfile -t bdfs 00:03:42.111 00:05:00 -- common/autotest_common.sh@1585 -- # get_nvme_bdfs_by_id 0x0a54 00:03:42.111 00:05:00 -- common/autotest_common.sh@1571 -- # bdfs=() 00:03:42.111 00:05:00 -- common/autotest_common.sh@1571 -- # local bdfs 00:03:42.111 00:05:00 -- common/autotest_common.sh@1573 -- # get_nvme_bdfs 00:03:42.111 00:05:00 -- common/autotest_common.sh@1507 -- # bdfs=() 00:03:42.111 00:05:00 -- common/autotest_common.sh@1507 -- # local bdfs 00:03:42.111 00:05:00 -- common/autotest_common.sh@1508 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:42.111 00:05:00 -- common/autotest_common.sh@1508 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:42.111 00:05:00 -- common/autotest_common.sh@1508 -- # jq -r '.config[].params.traddr' 00:03:42.111 00:05:00 -- common/autotest_common.sh@1509 -- # (( 1 == 0 )) 00:03:42.111 00:05:00 -- common/autotest_common.sh@1513 -- # printf '%s\n' 0000:5e:00.0 00:03:42.111 00:05:00 -- common/autotest_common.sh@1573 -- # for bdf in $(get_nvme_bdfs) 00:03:42.111 00:05:00 -- common/autotest_common.sh@1574 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:42.111 00:05:00 -- common/autotest_common.sh@1574 -- # device=0x0a54 00:03:42.111 00:05:00 -- common/autotest_common.sh@1575 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:42.111 00:05:00 -- common/autotest_common.sh@1576 -- # bdfs+=($bdf) 00:03:42.111 00:05:00 -- common/autotest_common.sh@1580 -- # printf '%s\n' 0000:5e:00.0 00:03:42.111 00:05:00 -- common/autotest_common.sh@1586 -- # [[ -z 0000:5e:00.0 ]] 00:03:42.111 00:05:00 -- common/autotest_common.sh@1591 -- # spdk_tgt_pid=1321451 00:03:42.111 00:05:00 -- common/autotest_common.sh@1592 -- # waitforlisten 1321451 00:03:42.111 00:05:00 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.111 00:05:00 -- common/autotest_common.sh@823 -- # '[' -z 1321451 ']' 00:03:42.111 00:05:00 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:42.111 00:05:00 -- common/autotest_common.sh@828 -- # local max_retries=100 00:03:42.111 00:05:00 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:42.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:42.111 00:05:00 -- common/autotest_common.sh@832 -- # xtrace_disable 00:03:42.111 00:05:00 -- common/autotest_common.sh@10 -- # set +x 00:03:42.111 [2024-07-16 00:05:00.902467] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
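The get_nvme_bdfs_by_id walk that just ran is likewise short when flattened: a jq pass over gen_nvme.sh output plus a sysfs compare. A condensed sketch using the same paths as the trace, with $rootdir standing for the spdk checkout as in the log:

    # List NVMe transport addresses from SPDK's generated config, then keep
    # only the BDFs whose sysfs PCI device id matches the requested 0x0a54
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    matches=()
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0a54
        [[ $device == "0x0a54" ]] && matches+=("$bdf")
    done
    printf '%s\n' "${matches[@]}"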
00:03:42.111 [2024-07-16 00:05:00.902513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1321451 ] 00:03:42.111 [2024-07-16 00:05:00.956902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.370 [2024-07-16 00:05:01.030177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:42.939 00:05:01 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:03:42.939 00:05:01 -- common/autotest_common.sh@856 -- # return 0 00:03:42.939 00:05:01 -- common/autotest_common.sh@1594 -- # bdf_id=0 00:03:42.939 00:05:01 -- common/autotest_common.sh@1595 -- # for bdf in "${bdfs[@]}" 00:03:42.939 00:05:01 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:46.229 nvme0n1 00:03:46.229 00:05:04 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:46.229 [2024-07-16 00:05:04.836151] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:46.229 request: 00:03:46.229 { 00:03:46.229 "nvme_ctrlr_name": "nvme0", 00:03:46.229 "password": "test", 00:03:46.229 "method": "bdev_nvme_opal_revert", 00:03:46.229 "req_id": 1 00:03:46.229 } 00:03:46.229 Got JSON-RPC error response 00:03:46.229 response: 00:03:46.229 { 00:03:46.229 "code": -32602, 00:03:46.229 "message": "Invalid parameters" 00:03:46.229 } 00:03:46.229 00:05:04 -- common/autotest_common.sh@1598 -- # true 00:03:46.229 00:05:04 -- common/autotest_common.sh@1599 -- # (( ++bdf_id )) 00:03:46.229 00:05:04 -- common/autotest_common.sh@1602 -- # killprocess 1321451 00:03:46.229 00:05:04 -- common/autotest_common.sh@942 -- # '[' -z 1321451 ']' 00:03:46.229 00:05:04 -- common/autotest_common.sh@946 -- # kill -0 1321451 00:03:46.229 00:05:04 -- common/autotest_common.sh@947 -- # uname 00:03:46.229 00:05:04 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:03:46.229 00:05:04 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1321451 00:03:46.229 00:05:04 -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:03:46.229 00:05:04 -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:03:46.229 00:05:04 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1321451' 00:03:46.229 killing process with pid 1321451 00:03:46.229 00:05:04 -- common/autotest_common.sh@961 -- # kill 1321451 00:03:46.229 00:05:04 -- common/autotest_common.sh@966 -- # wait 1321451 00:03:48.138 00:05:06 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:48.138 00:05:06 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:48.138 00:05:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:48.138 00:05:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:48.138 00:05:06 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:48.138 00:05:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:48.138 00:05:06 -- common/autotest_common.sh@10 -- # set +x 00:03:48.138 00:05:06 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:48.138 00:05:06 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:48.138 00:05:06 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:48.138 00:05:06 -- common/autotest_common.sh@1099 -- # 
xtrace_disable 00:03:48.138 00:05:06 -- common/autotest_common.sh@10 -- # set +x 00:03:48.138 ************************************ 00:03:48.138 START TEST env 00:03:48.138 ************************************ 00:03:48.138 00:05:06 env -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:48.138 * Looking for test storage... 00:03:48.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:48.138 00:05:06 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:48.138 00:05:06 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:48.138 00:05:06 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:48.138 00:05:06 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.138 ************************************ 00:03:48.138 START TEST env_memory 00:03:48.138 ************************************ 00:03:48.138 00:05:06 env.env_memory -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:48.138 00:03:48.138 00:03:48.139 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.139 http://cunit.sourceforge.net/ 00:03:48.139 00:03:48.139 00:03:48.139 Suite: memory 00:03:48.139 Test: alloc and free memory map ...[2024-07-16 00:05:06.672557] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:48.139 passed 00:03:48.139 Test: mem map translation ...[2024-07-16 00:05:06.692677] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:48.139 [2024-07-16 00:05:06.692693] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:48.139 [2024-07-16 00:05:06.692730] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:48.139 [2024-07-16 00:05:06.692738] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:48.139 passed 00:03:48.139 Test: mem map registration ...[2024-07-16 00:05:06.735979] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:48.139 [2024-07-16 00:05:06.735993] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:48.139 passed 00:03:48.139 Test: mem map adjacent registrations ...passed 00:03:48.139 00:03:48.139 Run Summary: Type Total Ran Passed Failed Inactive 00:03:48.139 suites 1 1 n/a 0 0 00:03:48.139 tests 4 4 4 0 0 00:03:48.139 asserts 152 152 152 0 n/a 00:03:48.139 00:03:48.139 Elapsed time = 0.144 seconds 00:03:48.139 00:03:48.139 real 0m0.157s 00:03:48.139 user 0m0.145s 00:03:48.139 sys 0m0.011s 00:03:48.139 00:05:06 env.env_memory -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:48.139 00:05:06 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:48.139 ************************************ 
00:03:48.139 END TEST env_memory 00:03:48.139 ************************************ 00:03:48.139 00:05:06 env -- common/autotest_common.sh@1136 -- # return 0 00:03:48.139 00:05:06 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:48.139 00:05:06 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:48.139 00:05:06 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:48.139 00:05:06 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.139 ************************************ 00:03:48.139 START TEST env_vtophys 00:03:48.139 ************************************ 00:03:48.139 00:05:06 env.env_vtophys -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:48.139 EAL: lib.eal log level changed from notice to debug 00:03:48.139 EAL: Detected lcore 0 as core 0 on socket 0 00:03:48.139 EAL: Detected lcore 1 as core 1 on socket 0 00:03:48.139 EAL: Detected lcore 2 as core 2 on socket 0 00:03:48.139 EAL: Detected lcore 3 as core 3 on socket 0 00:03:48.139 EAL: Detected lcore 4 as core 4 on socket 0 00:03:48.139 EAL: Detected lcore 5 as core 5 on socket 0 00:03:48.139 EAL: Detected lcore 6 as core 6 on socket 0 00:03:48.139 EAL: Detected lcore 7 as core 8 on socket 0 00:03:48.139 EAL: Detected lcore 8 as core 9 on socket 0 00:03:48.139 EAL: Detected lcore 9 as core 10 on socket 0 00:03:48.139 EAL: Detected lcore 10 as core 11 on socket 0 00:03:48.139 EAL: Detected lcore 11 as core 12 on socket 0 00:03:48.139 EAL: Detected lcore 12 as core 13 on socket 0 00:03:48.139 EAL: Detected lcore 13 as core 16 on socket 0 00:03:48.139 EAL: Detected lcore 14 as core 17 on socket 0 00:03:48.139 EAL: Detected lcore 15 as core 18 on socket 0 00:03:48.139 EAL: Detected lcore 16 as core 19 on socket 0 00:03:48.139 EAL: Detected lcore 17 as core 20 on socket 0 00:03:48.139 EAL: Detected lcore 18 as core 21 on socket 0 00:03:48.139 EAL: Detected lcore 19 as core 25 on socket 0 00:03:48.139 EAL: Detected lcore 20 as core 26 on socket 0 00:03:48.139 EAL: Detected lcore 21 as core 27 on socket 0 00:03:48.139 EAL: Detected lcore 22 as core 28 on socket 0 00:03:48.139 EAL: Detected lcore 23 as core 29 on socket 0 00:03:48.139 EAL: Detected lcore 24 as core 0 on socket 1 00:03:48.139 EAL: Detected lcore 25 as core 1 on socket 1 00:03:48.139 EAL: Detected lcore 26 as core 2 on socket 1 00:03:48.139 EAL: Detected lcore 27 as core 3 on socket 1 00:03:48.139 EAL: Detected lcore 28 as core 4 on socket 1 00:03:48.139 EAL: Detected lcore 29 as core 5 on socket 1 00:03:48.139 EAL: Detected lcore 30 as core 6 on socket 1 00:03:48.139 EAL: Detected lcore 31 as core 9 on socket 1 00:03:48.139 EAL: Detected lcore 32 as core 10 on socket 1 00:03:48.139 EAL: Detected lcore 33 as core 11 on socket 1 00:03:48.139 EAL: Detected lcore 34 as core 12 on socket 1 00:03:48.139 EAL: Detected lcore 35 as core 13 on socket 1 00:03:48.139 EAL: Detected lcore 36 as core 16 on socket 1 00:03:48.139 EAL: Detected lcore 37 as core 17 on socket 1 00:03:48.139 EAL: Detected lcore 38 as core 18 on socket 1 00:03:48.139 EAL: Detected lcore 39 as core 19 on socket 1 00:03:48.139 EAL: Detected lcore 40 as core 20 on socket 1 00:03:48.139 EAL: Detected lcore 41 as core 21 on socket 1 00:03:48.139 EAL: Detected lcore 42 as core 24 on socket 1 00:03:48.139 EAL: Detected lcore 43 as core 25 on socket 1 00:03:48.139 EAL: Detected lcore 44 as core 26 on socket 1 00:03:48.139 EAL: Detected lcore 45 as 
core 27 on socket 1 00:03:48.139 EAL: Detected lcore 46 as core 28 on socket 1 00:03:48.139 EAL: Detected lcore 47 as core 29 on socket 1 00:03:48.139 EAL: Detected lcore 48 as core 0 on socket 0 00:03:48.139 EAL: Detected lcore 49 as core 1 on socket 0 00:03:48.139 EAL: Detected lcore 50 as core 2 on socket 0 00:03:48.139 EAL: Detected lcore 51 as core 3 on socket 0 00:03:48.139 EAL: Detected lcore 52 as core 4 on socket 0 00:03:48.139 EAL: Detected lcore 53 as core 5 on socket 0 00:03:48.139 EAL: Detected lcore 54 as core 6 on socket 0 00:03:48.139 EAL: Detected lcore 55 as core 8 on socket 0 00:03:48.139 EAL: Detected lcore 56 as core 9 on socket 0 00:03:48.139 EAL: Detected lcore 57 as core 10 on socket 0 00:03:48.139 EAL: Detected lcore 58 as core 11 on socket 0 00:03:48.139 EAL: Detected lcore 59 as core 12 on socket 0 00:03:48.139 EAL: Detected lcore 60 as core 13 on socket 0 00:03:48.139 EAL: Detected lcore 61 as core 16 on socket 0 00:03:48.139 EAL: Detected lcore 62 as core 17 on socket 0 00:03:48.139 EAL: Detected lcore 63 as core 18 on socket 0 00:03:48.139 EAL: Detected lcore 64 as core 19 on socket 0 00:03:48.139 EAL: Detected lcore 65 as core 20 on socket 0 00:03:48.139 EAL: Detected lcore 66 as core 21 on socket 0 00:03:48.139 EAL: Detected lcore 67 as core 25 on socket 0 00:03:48.139 EAL: Detected lcore 68 as core 26 on socket 0 00:03:48.139 EAL: Detected lcore 69 as core 27 on socket 0 00:03:48.139 EAL: Detected lcore 70 as core 28 on socket 0 00:03:48.139 EAL: Detected lcore 71 as core 29 on socket 0 00:03:48.139 EAL: Detected lcore 72 as core 0 on socket 1 00:03:48.139 EAL: Detected lcore 73 as core 1 on socket 1 00:03:48.139 EAL: Detected lcore 74 as core 2 on socket 1 00:03:48.139 EAL: Detected lcore 75 as core 3 on socket 1 00:03:48.139 EAL: Detected lcore 76 as core 4 on socket 1 00:03:48.139 EAL: Detected lcore 77 as core 5 on socket 1 00:03:48.139 EAL: Detected lcore 78 as core 6 on socket 1 00:03:48.139 EAL: Detected lcore 79 as core 9 on socket 1 00:03:48.139 EAL: Detected lcore 80 as core 10 on socket 1 00:03:48.139 EAL: Detected lcore 81 as core 11 on socket 1 00:03:48.139 EAL: Detected lcore 82 as core 12 on socket 1 00:03:48.139 EAL: Detected lcore 83 as core 13 on socket 1 00:03:48.139 EAL: Detected lcore 84 as core 16 on socket 1 00:03:48.139 EAL: Detected lcore 85 as core 17 on socket 1 00:03:48.139 EAL: Detected lcore 86 as core 18 on socket 1 00:03:48.139 EAL: Detected lcore 87 as core 19 on socket 1 00:03:48.139 EAL: Detected lcore 88 as core 20 on socket 1 00:03:48.139 EAL: Detected lcore 89 as core 21 on socket 1 00:03:48.139 EAL: Detected lcore 90 as core 24 on socket 1 00:03:48.139 EAL: Detected lcore 91 as core 25 on socket 1 00:03:48.139 EAL: Detected lcore 92 as core 26 on socket 1 00:03:48.139 EAL: Detected lcore 93 as core 27 on socket 1 00:03:48.139 EAL: Detected lcore 94 as core 28 on socket 1 00:03:48.139 EAL: Detected lcore 95 as core 29 on socket 1 00:03:48.139 EAL: Maximum logical cores by configuration: 128 00:03:48.139 EAL: Detected CPU lcores: 96 00:03:48.139 EAL: Detected NUMA nodes: 2 00:03:48.139 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:48.139 EAL: Detected shared linkage of DPDK 00:03:48.139 EAL: No shared files mode enabled, IPC will be disabled 00:03:48.139 EAL: Bus pci wants IOVA as 'DC' 00:03:48.139 EAL: Buses did not request a specific IOVA mode. 00:03:48.139 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:48.139 EAL: Selected IOVA mode 'VA' 00:03:48.139 EAL: Probing VFIO support... 
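EAL only lands on IOVA mode 'VA' here because VFIO finds a usable IOMMU; a quick host-side way to see which branch a box will take (plain sysfs, not part of the SPDK scripts, offered as a hedged illustration):

    # A populated /sys/kernel/iommu_groups means the kernel IOMMU is active,
    # so VFIO can translate device DMA and EAL may pick IOVA-as-VA; an empty
    # directory typically means VFIO no-IOMMU mode or IOVA-as-PA instead
    if [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
        echo "IOMMU groups present: VFIO and IOVA mode 'VA' are available"
    else
        echo "no IOMMU groups: expect VFIO no-IOMMU or IOVA-as-PA"
    fi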
00:03:48.139 EAL: IOMMU type 1 (Type 1) is supported 00:03:48.139 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:48.139 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:48.139 EAL: VFIO support initialized 00:03:48.139 EAL: Ask a virtual area of 0x2e000 bytes 00:03:48.139 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:48.139 EAL: Setting up physically contiguous memory... 00:03:48.139 EAL: Setting maximum number of open files to 524288 00:03:48.139 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:48.139 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:48.139 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:48.139 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.139 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:48.139 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.139 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.139 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:48.139 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:48.139 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.139 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:48.139 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.139 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.139 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:48.139 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:48.139 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.139 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:48.139 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.139 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.139 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:48.139 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:48.139 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.139 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:48.139 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.139 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.139 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:48.139 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:48.139 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:48.139 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.140 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:48.140 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.140 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.140 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:48.140 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:48.140 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.140 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:48.140 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.140 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.140 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:48.140 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:48.140 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.140 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:48.140 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.140 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.140 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:03:48.140 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:48.140 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.140 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:48.140 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.140 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.140 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:48.140 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:48.140 EAL: Hugepages will be freed exactly as allocated. 00:03:48.140 EAL: No shared files mode enabled, IPC is disabled 00:03:48.140 EAL: No shared files mode enabled, IPC is disabled 00:03:48.140 EAL: TSC frequency is ~2300000 KHz 00:03:48.140 EAL: Main lcore 0 is ready (tid=7f271dfeba00;cpuset=[0]) 00:03:48.140 EAL: Trying to obtain current memory policy. 00:03:48.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.140 EAL: Restoring previous memory policy: 0 00:03:48.140 EAL: request: mp_malloc_sync 00:03:48.140 EAL: No shared files mode enabled, IPC is disabled 00:03:48.140 EAL: Heap on socket 0 was expanded by 2MB 00:03:48.140 EAL: No shared files mode enabled, IPC is disabled 00:03:48.140 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:48.140 EAL: Mem event callback 'spdk:(nil)' registered 00:03:48.140 00:03:48.140 00:03:48.140 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.140 http://cunit.sourceforge.net/ 00:03:48.140 00:03:48.140 00:03:48.140 Suite: components_suite 00:03:48.140 Test: vtophys_malloc_test ...passed 00:03:48.140 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:48.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.140 EAL: Restoring previous memory policy: 4 00:03:48.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.140 EAL: request: mp_malloc_sync 00:03:48.140 EAL: No shared files mode enabled, IPC is disabled 00:03:48.140 EAL: Heap on socket 0 was expanded by 4MB 00:03:48.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.140 EAL: request: mp_malloc_sync 00:03:48.140 EAL: No shared files mode enabled, IPC is disabled 00:03:48.140 EAL: Heap on socket 0 was shrunk by 4MB 00:03:48.140 EAL: Trying to obtain current memory policy. 00:03:48.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.140 EAL: Restoring previous memory policy: 4 00:03:48.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.140 EAL: request: mp_malloc_sync 00:03:48.140 EAL: No shared files mode enabled, IPC is disabled 00:03:48.140 EAL: Heap on socket 0 was expanded by 6MB 00:03:48.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.140 EAL: request: mp_malloc_sync 00:03:48.140 EAL: No shared files mode enabled, IPC is disabled 00:03:48.140 EAL: Heap on socket 0 was shrunk by 6MB 00:03:48.140 EAL: Trying to obtain current memory policy. 00:03:48.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.140 EAL: Restoring previous memory policy: 4 00:03:48.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.140 EAL: request: mp_malloc_sync 00:03:48.140 EAL: No shared files mode enabled, IPC is disabled 00:03:48.140 EAL: Heap on socket 0 was expanded by 10MB 00:03:48.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.140 EAL: request: mp_malloc_sync 00:03:48.140 EAL: No shared files mode enabled, IPC is disabled 00:03:48.140 EAL: Heap on socket 0 was shrunk by 10MB 00:03:48.140 EAL: Trying to obtain current memory policy. 
00:03:48.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.140 EAL: Restoring previous memory policy: 4 00:03:48.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.140 EAL: request: mp_malloc_sync 00:03:48.140 EAL: No shared files mode enabled, IPC is disabled 00:03:48.140 EAL: Heap on socket 0 was expanded by 18MB 00:03:48.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.140 EAL: request: mp_malloc_sync 00:03:48.140 EAL: No shared files mode enabled, IPC is disabled 00:03:48.140 EAL: Heap on socket 0 was shrunk by 18MB 00:03:48.140 EAL: Trying to obtain current memory policy. 00:03:48.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.140 EAL: Restoring previous memory policy: 4 00:03:48.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.140 EAL: request: mp_malloc_sync 00:03:48.140 EAL: No shared files mode enabled, IPC is disabled 00:03:48.140 EAL: Heap on socket 0 was expanded by 34MB 00:03:48.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.140 EAL: request: mp_malloc_sync 00:03:48.140 EAL: No shared files mode enabled, IPC is disabled 00:03:48.140 EAL: Heap on socket 0 was shrunk by 34MB 00:03:48.140 EAL: Trying to obtain current memory policy. 00:03:48.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.140 EAL: Restoring previous memory policy: 4 00:03:48.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.140 EAL: request: mp_malloc_sync 00:03:48.140 EAL: No shared files mode enabled, IPC is disabled 00:03:48.140 EAL: Heap on socket 0 was expanded by 66MB 00:03:48.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.140 EAL: request: mp_malloc_sync 00:03:48.140 EAL: No shared files mode enabled, IPC is disabled 00:03:48.140 EAL: Heap on socket 0 was shrunk by 66MB 00:03:48.140 EAL: Trying to obtain current memory policy. 00:03:48.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.140 EAL: Restoring previous memory policy: 4 00:03:48.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.140 EAL: request: mp_malloc_sync 00:03:48.140 EAL: No shared files mode enabled, IPC is disabled 00:03:48.140 EAL: Heap on socket 0 was expanded by 130MB 00:03:48.399 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.399 EAL: request: mp_malloc_sync 00:03:48.399 EAL: No shared files mode enabled, IPC is disabled 00:03:48.399 EAL: Heap on socket 0 was shrunk by 130MB 00:03:48.399 EAL: Trying to obtain current memory policy. 00:03:48.399 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.399 EAL: Restoring previous memory policy: 4 00:03:48.399 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.399 EAL: request: mp_malloc_sync 00:03:48.399 EAL: No shared files mode enabled, IPC is disabled 00:03:48.399 EAL: Heap on socket 0 was expanded by 258MB 00:03:48.399 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.399 EAL: request: mp_malloc_sync 00:03:48.399 EAL: No shared files mode enabled, IPC is disabled 00:03:48.399 EAL: Heap on socket 0 was shrunk by 258MB 00:03:48.399 EAL: Trying to obtain current memory policy. 
00:03:48.399 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.399 EAL: Restoring previous memory policy: 4 00:03:48.399 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.399 EAL: request: mp_malloc_sync 00:03:48.399 EAL: No shared files mode enabled, IPC is disabled 00:03:48.399 EAL: Heap on socket 0 was expanded by 514MB 00:03:48.657 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.657 EAL: request: mp_malloc_sync 00:03:48.657 EAL: No shared files mode enabled, IPC is disabled 00:03:48.657 EAL: Heap on socket 0 was shrunk by 514MB 00:03:48.657 EAL: Trying to obtain current memory policy. 00:03:48.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.914 EAL: Restoring previous memory policy: 4 00:03:48.914 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.914 EAL: request: mp_malloc_sync 00:03:48.914 EAL: No shared files mode enabled, IPC is disabled 00:03:48.914 EAL: Heap on socket 0 was expanded by 1026MB 00:03:48.914 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.173 EAL: request: mp_malloc_sync 00:03:49.173 EAL: No shared files mode enabled, IPC is disabled 00:03:49.173 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:49.173 passed 00:03:49.173 00:03:49.173 Run Summary: Type Total Ran Passed Failed Inactive 00:03:49.173 suites 1 1 n/a 0 0 00:03:49.173 tests 2 2 2 0 0 00:03:49.173 asserts 497 497 497 0 n/a 00:03:49.173 00:03:49.173 Elapsed time = 0.960 seconds 00:03:49.173 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.173 EAL: request: mp_malloc_sync 00:03:49.173 EAL: No shared files mode enabled, IPC is disabled 00:03:49.173 EAL: Heap on socket 0 was shrunk by 2MB 00:03:49.173 EAL: No shared files mode enabled, IPC is disabled 00:03:49.173 EAL: No shared files mode enabled, IPC is disabled 00:03:49.173 EAL: No shared files mode enabled, IPC is disabled 00:03:49.173 00:03:49.173 real 0m1.069s 00:03:49.173 user 0m0.633s 00:03:49.173 sys 0m0.409s 00:03:49.173 00:05:07 env.env_vtophys -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:49.173 00:05:07 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:49.173 ************************************ 00:03:49.173 END TEST env_vtophys 00:03:49.173 ************************************ 00:03:49.173 00:05:07 env -- common/autotest_common.sh@1136 -- # return 0 00:03:49.173 00:05:07 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:49.173 00:05:07 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:49.173 00:05:07 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:49.173 00:05:07 env -- common/autotest_common.sh@10 -- # set +x 00:03:49.173 ************************************ 00:03:49.173 START TEST env_pci 00:03:49.173 ************************************ 00:03:49.173 00:05:07 env.env_pci -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:49.173 00:03:49.173 00:03:49.173 CUnit - A unit testing framework for C - Version 2.1-3 00:03:49.173 http://cunit.sourceforge.net/ 00:03:49.173 00:03:49.173 00:03:49.173 Suite: pci 00:03:49.173 Test: pci_hook ...[2024-07-16 00:05:07.976928] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1322750 has claimed it 00:03:49.173 EAL: Cannot find device (10000:00:01.0) 00:03:49.173 EAL: Failed to attach device on primary process 00:03:49.173 passed 00:03:49.173 
00:03:49.173 Run Summary: Type Total Ran Passed Failed Inactive 00:03:49.173 suites 1 1 n/a 0 0 00:03:49.173 tests 1 1 1 0 0 00:03:49.173 asserts 25 25 25 0 n/a 00:03:49.173 00:03:49.173 Elapsed time = 0.025 seconds 00:03:49.173 00:03:49.173 real 0m0.041s 00:03:49.173 user 0m0.009s 00:03:49.173 sys 0m0.031s 00:03:49.173 00:05:08 env.env_pci -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:49.173 00:05:08 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:49.173 ************************************ 00:03:49.173 END TEST env_pci 00:03:49.173 ************************************ 00:03:49.431 00:05:08 env -- common/autotest_common.sh@1136 -- # return 0 00:03:49.431 00:05:08 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:49.431 00:05:08 env -- env/env.sh@15 -- # uname 00:03:49.431 00:05:08 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:49.431 00:05:08 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:49.431 00:05:08 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:49.431 00:05:08 env -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:03:49.432 00:05:08 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:49.432 00:05:08 env -- common/autotest_common.sh@10 -- # set +x 00:03:49.432 ************************************ 00:03:49.432 START TEST env_dpdk_post_init 00:03:49.432 ************************************ 00:03:49.432 00:05:08 env.env_dpdk_post_init -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:49.432 EAL: Detected CPU lcores: 96 00:03:49.432 EAL: Detected NUMA nodes: 2 00:03:49.432 EAL: Detected shared linkage of DPDK 00:03:49.432 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:49.432 EAL: Selected IOVA mode 'VA' 00:03:49.432 EAL: VFIO support initialized 00:03:49.432 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:49.432 EAL: Using IOMMU type 1 (Type 1) 00:03:49.432 EAL: Ignore mapping IO port bar(1) 00:03:49.432 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:49.432 EAL: Ignore mapping IO port bar(1) 00:03:49.432 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:49.432 EAL: Ignore mapping IO port bar(1) 00:03:49.432 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:49.432 EAL: Ignore mapping IO port bar(1) 00:03:49.432 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:49.432 EAL: Ignore mapping IO port bar(1) 00:03:49.432 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:49.432 EAL: Ignore mapping IO port bar(1) 00:03:49.432 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:49.432 EAL: Ignore mapping IO port bar(1) 00:03:49.432 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:49.432 EAL: Ignore mapping IO port bar(1) 00:03:49.432 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:50.389 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:50.389 EAL: Ignore mapping IO port bar(1) 00:03:50.389 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:50.389 EAL: Ignore mapping IO port bar(1) 00:03:50.389 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:50.389 EAL: Ignore mapping IO port bar(1) 00:03:50.389 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:50.389 EAL: Ignore mapping IO port bar(1) 00:03:50.389 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:50.389 EAL: Ignore mapping IO port bar(1) 00:03:50.389 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:50.389 EAL: Ignore mapping IO port bar(1) 00:03:50.389 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:50.389 EAL: Ignore mapping IO port bar(1) 00:03:50.389 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:50.389 EAL: Ignore mapping IO port bar(1) 00:03:50.389 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:53.677 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:53.677 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:53.677 Starting DPDK initialization... 00:03:53.677 Starting SPDK post initialization... 00:03:53.677 SPDK NVMe probe 00:03:53.677 Attaching to 0000:5e:00.0 00:03:53.677 Attached to 0000:5e:00.0 00:03:53.677 Cleaning up... 00:03:53.677 00:03:53.677 real 0m4.337s 00:03:53.677 user 0m3.287s 00:03:53.677 sys 0m0.118s 00:03:53.677 00:05:12 env.env_dpdk_post_init -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:53.677 00:05:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:53.677 ************************************ 00:03:53.677 END TEST env_dpdk_post_init 00:03:53.677 ************************************ 00:03:53.677 00:05:12 env -- common/autotest_common.sh@1136 -- # return 0 00:03:53.677 00:05:12 env -- env/env.sh@26 -- # uname 00:03:53.677 00:05:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:53.677 00:05:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:53.677 00:05:12 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:53.677 00:05:12 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:53.677 00:05:12 env -- common/autotest_common.sh@10 -- # set +x 00:03:53.677 ************************************ 00:03:53.677 START TEST env_mem_callbacks 00:03:53.677 ************************************ 00:03:53.677 00:05:12 env.env_mem_callbacks -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:53.677 EAL: Detected CPU lcores: 96 00:03:53.677 EAL: Detected NUMA nodes: 2 00:03:53.677 EAL: Detected shared linkage of DPDK 00:03:53.677 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:53.677 EAL: Selected IOVA mode 'VA' 00:03:53.677 EAL: VFIO support initialized 00:03:53.677 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:53.677 00:03:53.677 00:03:53.677 CUnit - A unit testing framework for C - Version 2.1-3 00:03:53.677 http://cunit.sourceforge.net/ 00:03:53.677 00:03:53.677 00:03:53.677 Suite: memory 00:03:53.677 Test: test ... 
00:03:53.677 register 0x200000200000 2097152 00:03:53.677 malloc 3145728 00:03:53.677 register 0x200000400000 4194304 00:03:53.677 buf 0x200000500000 len 3145728 PASSED 00:03:53.677 malloc 64 00:03:53.677 buf 0x2000004fff40 len 64 PASSED 00:03:53.677 malloc 4194304 00:03:53.677 register 0x200000800000 6291456 00:03:53.677 buf 0x200000a00000 len 4194304 PASSED 00:03:53.677 free 0x200000500000 3145728 00:03:53.677 free 0x2000004fff40 64 00:03:53.677 unregister 0x200000400000 4194304 PASSED 00:03:53.677 free 0x200000a00000 4194304 00:03:53.677 unregister 0x200000800000 6291456 PASSED 00:03:53.677 malloc 8388608 00:03:53.677 register 0x200000400000 10485760 00:03:53.677 buf 0x200000600000 len 8388608 PASSED 00:03:53.677 free 0x200000600000 8388608 00:03:53.677 unregister 0x200000400000 10485760 PASSED 00:03:53.677 passed 00:03:53.677 00:03:53.677 Run Summary: Type Total Ran Passed Failed Inactive 00:03:53.677 suites 1 1 n/a 0 0 00:03:53.677 tests 1 1 1 0 0 00:03:53.677 asserts 15 15 15 0 n/a 00:03:53.677 00:03:53.677 Elapsed time = 0.005 seconds 00:03:53.936 00:03:53.936 real 0m0.059s 00:03:53.936 user 0m0.022s 00:03:53.936 sys 0m0.037s 00:03:53.936 00:05:12 env.env_mem_callbacks -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:53.936 00:05:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:53.936 ************************************ 00:03:53.936 END TEST env_mem_callbacks 00:03:53.936 ************************************ 00:03:53.936 00:05:12 env -- common/autotest_common.sh@1136 -- # return 0 00:03:53.936 00:03:53.936 real 0m6.040s 00:03:53.936 user 0m4.265s 00:03:53.936 sys 0m0.846s 00:03:53.936 00:05:12 env -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:53.936 00:05:12 env -- common/autotest_common.sh@10 -- # set +x 00:03:53.936 ************************************ 00:03:53.936 END TEST env 00:03:53.936 ************************************ 00:03:53.936 00:05:12 -- common/autotest_common.sh@1136 -- # return 0 00:03:53.936 00:05:12 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:53.937 00:05:12 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:53.937 00:05:12 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:53.937 00:05:12 -- common/autotest_common.sh@10 -- # set +x 00:03:53.937 ************************************ 00:03:53.937 START TEST rpc 00:03:53.937 ************************************ 00:03:53.937 00:05:12 rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:53.937 * Looking for test storage... 00:03:53.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:53.937 00:05:12 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1323572 00:03:53.937 00:05:12 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:53.937 00:05:12 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:53.937 00:05:12 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1323572 00:03:53.937 00:05:12 rpc -- common/autotest_common.sh@823 -- # '[' -z 1323572 ']' 00:03:53.937 00:05:12 rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:53.937 00:05:12 rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:03:53.937 00:05:12 rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:03:53.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:53.937 00:05:12 rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:03:53.937 00:05:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.937 [2024-07-16 00:05:12.760450] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:03:53.937 [2024-07-16 00:05:12.760495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1323572 ] 00:03:54.196 [2024-07-16 00:05:12.814140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.196 [2024-07-16 00:05:12.887087] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:54.196 [2024-07-16 00:05:12.887130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1323572' to capture a snapshot of events at runtime. 00:03:54.196 [2024-07-16 00:05:12.887137] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:54.196 [2024-07-16 00:05:12.887143] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:54.196 [2024-07-16 00:05:12.887149] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1323572 for offline analysis/debug. 00:03:54.196 [2024-07-16 00:05:12.887168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.764 00:05:13 rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:03:54.764 00:05:13 rpc -- common/autotest_common.sh@856 -- # return 0 00:03:54.764 00:05:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:54.764 00:05:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:54.764 00:05:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:54.764 00:05:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:54.764 00:05:13 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:54.764 00:05:13 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:54.764 00:05:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.764 ************************************ 00:03:54.764 START TEST rpc_integrity 00:03:54.764 ************************************ 00:03:54.764 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@1117 -- # rpc_integrity 00:03:54.764 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:54.764 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:54.764 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.764 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:54.764 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:54.764 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@13 
-- # jq length 00:03:55.039 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:55.039 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:55.039 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:55.039 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.039 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:55.039 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:55.039 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:55.039 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:55.039 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.039 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:55.039 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:55.039 { 00:03:55.039 "name": "Malloc0", 00:03:55.039 "aliases": [ 00:03:55.039 "27b0b33b-af2e-4fed-954c-6a99ab4db2c2" 00:03:55.039 ], 00:03:55.039 "product_name": "Malloc disk", 00:03:55.039 "block_size": 512, 00:03:55.039 "num_blocks": 16384, 00:03:55.039 "uuid": "27b0b33b-af2e-4fed-954c-6a99ab4db2c2", 00:03:55.039 "assigned_rate_limits": { 00:03:55.039 "rw_ios_per_sec": 0, 00:03:55.039 "rw_mbytes_per_sec": 0, 00:03:55.039 "r_mbytes_per_sec": 0, 00:03:55.039 "w_mbytes_per_sec": 0 00:03:55.039 }, 00:03:55.039 "claimed": false, 00:03:55.039 "zoned": false, 00:03:55.039 "supported_io_types": { 00:03:55.039 "read": true, 00:03:55.039 "write": true, 00:03:55.039 "unmap": true, 00:03:55.039 "flush": true, 00:03:55.039 "reset": true, 00:03:55.039 "nvme_admin": false, 00:03:55.039 "nvme_io": false, 00:03:55.039 "nvme_io_md": false, 00:03:55.039 "write_zeroes": true, 00:03:55.039 "zcopy": true, 00:03:55.039 "get_zone_info": false, 00:03:55.039 "zone_management": false, 00:03:55.039 "zone_append": false, 00:03:55.039 "compare": false, 00:03:55.039 "compare_and_write": false, 00:03:55.039 "abort": true, 00:03:55.039 "seek_hole": false, 00:03:55.039 "seek_data": false, 00:03:55.039 "copy": true, 00:03:55.039 "nvme_iov_md": false 00:03:55.039 }, 00:03:55.039 "memory_domains": [ 00:03:55.039 { 00:03:55.039 "dma_device_id": "system", 00:03:55.039 "dma_device_type": 1 00:03:55.039 }, 00:03:55.039 { 00:03:55.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.039 "dma_device_type": 2 00:03:55.039 } 00:03:55.039 ], 00:03:55.039 "driver_specific": {} 00:03:55.039 } 00:03:55.039 ]' 00:03:55.039 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:55.039 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:55.039 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:55.039 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:55.039 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.039 [2024-07-16 00:05:13.709620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:55.039 [2024-07-16 00:05:13.709651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:55.039 [2024-07-16 00:05:13.709663] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11c02d0 00:03:55.039 [2024-07-16 00:05:13.709669] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:55.039 [2024-07-16 00:05:13.710743] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:55.039 [2024-07-16 00:05:13.710765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:55.039 Passthru0 00:03:55.039 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:55.039 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:55.039 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:55.039 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.039 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:55.039 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:55.039 { 00:03:55.039 "name": "Malloc0", 00:03:55.039 "aliases": [ 00:03:55.039 "27b0b33b-af2e-4fed-954c-6a99ab4db2c2" 00:03:55.039 ], 00:03:55.039 "product_name": "Malloc disk", 00:03:55.039 "block_size": 512, 00:03:55.039 "num_blocks": 16384, 00:03:55.039 "uuid": "27b0b33b-af2e-4fed-954c-6a99ab4db2c2", 00:03:55.039 "assigned_rate_limits": { 00:03:55.039 "rw_ios_per_sec": 0, 00:03:55.039 "rw_mbytes_per_sec": 0, 00:03:55.039 "r_mbytes_per_sec": 0, 00:03:55.039 "w_mbytes_per_sec": 0 00:03:55.039 }, 00:03:55.039 "claimed": true, 00:03:55.039 "claim_type": "exclusive_write", 00:03:55.039 "zoned": false, 00:03:55.039 "supported_io_types": { 00:03:55.039 "read": true, 00:03:55.039 "write": true, 00:03:55.039 "unmap": true, 00:03:55.039 "flush": true, 00:03:55.039 "reset": true, 00:03:55.039 "nvme_admin": false, 00:03:55.039 "nvme_io": false, 00:03:55.039 "nvme_io_md": false, 00:03:55.039 "write_zeroes": true, 00:03:55.039 "zcopy": true, 00:03:55.039 "get_zone_info": false, 00:03:55.039 "zone_management": false, 00:03:55.039 "zone_append": false, 00:03:55.039 "compare": false, 00:03:55.040 "compare_and_write": false, 00:03:55.040 "abort": true, 00:03:55.040 "seek_hole": false, 00:03:55.040 "seek_data": false, 00:03:55.040 "copy": true, 00:03:55.040 "nvme_iov_md": false 00:03:55.040 }, 00:03:55.040 "memory_domains": [ 00:03:55.040 { 00:03:55.040 "dma_device_id": "system", 00:03:55.040 "dma_device_type": 1 00:03:55.040 }, 00:03:55.040 { 00:03:55.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.040 "dma_device_type": 2 00:03:55.040 } 00:03:55.040 ], 00:03:55.040 "driver_specific": {} 00:03:55.040 }, 00:03:55.040 { 00:03:55.040 "name": "Passthru0", 00:03:55.040 "aliases": [ 00:03:55.040 "c2e2254f-e9fc-5e8d-a763-a6183a9b3051" 00:03:55.040 ], 00:03:55.040 "product_name": "passthru", 00:03:55.040 "block_size": 512, 00:03:55.040 "num_blocks": 16384, 00:03:55.040 "uuid": "c2e2254f-e9fc-5e8d-a763-a6183a9b3051", 00:03:55.040 "assigned_rate_limits": { 00:03:55.040 "rw_ios_per_sec": 0, 00:03:55.040 "rw_mbytes_per_sec": 0, 00:03:55.040 "r_mbytes_per_sec": 0, 00:03:55.040 "w_mbytes_per_sec": 0 00:03:55.040 }, 00:03:55.040 "claimed": false, 00:03:55.040 "zoned": false, 00:03:55.040 "supported_io_types": { 00:03:55.040 "read": true, 00:03:55.040 "write": true, 00:03:55.040 "unmap": true, 00:03:55.040 "flush": true, 00:03:55.040 "reset": true, 00:03:55.040 "nvme_admin": false, 00:03:55.040 "nvme_io": false, 00:03:55.040 "nvme_io_md": false, 00:03:55.040 "write_zeroes": true, 00:03:55.040 "zcopy": true, 00:03:55.040 "get_zone_info": false, 00:03:55.040 "zone_management": false, 00:03:55.040 "zone_append": false, 00:03:55.040 "compare": false, 00:03:55.040 "compare_and_write": false, 00:03:55.040 "abort": true, 00:03:55.040 "seek_hole": false, 00:03:55.040 "seek_data": false, 00:03:55.040 
"copy": true, 00:03:55.040 "nvme_iov_md": false 00:03:55.040 }, 00:03:55.040 "memory_domains": [ 00:03:55.040 { 00:03:55.040 "dma_device_id": "system", 00:03:55.040 "dma_device_type": 1 00:03:55.040 }, 00:03:55.040 { 00:03:55.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.040 "dma_device_type": 2 00:03:55.040 } 00:03:55.040 ], 00:03:55.040 "driver_specific": { 00:03:55.040 "passthru": { 00:03:55.040 "name": "Passthru0", 00:03:55.040 "base_bdev_name": "Malloc0" 00:03:55.040 } 00:03:55.040 } 00:03:55.040 } 00:03:55.040 ]' 00:03:55.040 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:55.040 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:55.040 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:55.040 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:55.040 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.040 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:55.040 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:55.040 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:55.040 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.040 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:55.040 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:55.040 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:55.040 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.040 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:55.040 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:55.040 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:55.040 00:05:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:55.040 00:03:55.040 real 0m0.265s 00:03:55.040 user 0m0.168s 00:03:55.040 sys 0m0.032s 00:03:55.040 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:55.040 00:05:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.040 ************************************ 00:03:55.040 END TEST rpc_integrity 00:03:55.040 ************************************ 00:03:55.040 00:05:13 rpc -- common/autotest_common.sh@1136 -- # return 0 00:03:55.040 00:05:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:55.040 00:05:13 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:55.040 00:05:13 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:55.040 00:05:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.301 ************************************ 00:03:55.301 START TEST rpc_plugins 00:03:55.301 ************************************ 00:03:55.301 00:05:13 rpc.rpc_plugins -- common/autotest_common.sh@1117 -- # rpc_plugins 00:03:55.301 00:05:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:55.301 00:05:13 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:55.301 00:05:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:55.301 00:05:13 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:55.301 00:05:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:55.301 00:05:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:55.301 00:05:13 
rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:55.301 00:05:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:55.301 00:05:13 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:55.301 00:05:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:55.301 { 00:03:55.301 "name": "Malloc1", 00:03:55.301 "aliases": [ 00:03:55.301 "77176223-5f74-4c88-900c-191f4ab78ed7" 00:03:55.301 ], 00:03:55.301 "product_name": "Malloc disk", 00:03:55.301 "block_size": 4096, 00:03:55.301 "num_blocks": 256, 00:03:55.301 "uuid": "77176223-5f74-4c88-900c-191f4ab78ed7", 00:03:55.301 "assigned_rate_limits": { 00:03:55.301 "rw_ios_per_sec": 0, 00:03:55.301 "rw_mbytes_per_sec": 0, 00:03:55.301 "r_mbytes_per_sec": 0, 00:03:55.301 "w_mbytes_per_sec": 0 00:03:55.301 }, 00:03:55.301 "claimed": false, 00:03:55.301 "zoned": false, 00:03:55.301 "supported_io_types": { 00:03:55.301 "read": true, 00:03:55.301 "write": true, 00:03:55.301 "unmap": true, 00:03:55.301 "flush": true, 00:03:55.301 "reset": true, 00:03:55.301 "nvme_admin": false, 00:03:55.301 "nvme_io": false, 00:03:55.301 "nvme_io_md": false, 00:03:55.301 "write_zeroes": true, 00:03:55.301 "zcopy": true, 00:03:55.301 "get_zone_info": false, 00:03:55.301 "zone_management": false, 00:03:55.301 "zone_append": false, 00:03:55.301 "compare": false, 00:03:55.301 "compare_and_write": false, 00:03:55.301 "abort": true, 00:03:55.301 "seek_hole": false, 00:03:55.301 "seek_data": false, 00:03:55.301 "copy": true, 00:03:55.301 "nvme_iov_md": false 00:03:55.301 }, 00:03:55.301 "memory_domains": [ 00:03:55.301 { 00:03:55.301 "dma_device_id": "system", 00:03:55.301 "dma_device_type": 1 00:03:55.301 }, 00:03:55.301 { 00:03:55.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.301 "dma_device_type": 2 00:03:55.301 } 00:03:55.301 ], 00:03:55.301 "driver_specific": {} 00:03:55.301 } 00:03:55.301 ]' 00:03:55.301 00:05:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:55.301 00:05:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:55.301 00:05:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:55.301 00:05:13 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:55.301 00:05:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:55.301 00:05:13 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:55.301 00:05:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:55.301 00:05:13 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:55.301 00:05:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:55.301 00:05:14 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:55.301 00:05:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:55.301 00:05:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:55.301 00:05:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:55.301 00:03:55.301 real 0m0.138s 00:03:55.301 user 0m0.088s 00:03:55.301 sys 0m0.019s 00:03:55.301 00:05:14 rpc.rpc_plugins -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:55.301 00:05:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:55.301 ************************************ 00:03:55.301 END TEST rpc_plugins 00:03:55.301 ************************************ 00:03:55.301 00:05:14 rpc -- common/autotest_common.sh@1136 -- # return 0 00:03:55.301 00:05:14 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:03:55.301 00:05:14 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:55.301 00:05:14 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:55.301 00:05:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.301 ************************************ 00:03:55.301 START TEST rpc_trace_cmd_test 00:03:55.301 ************************************ 00:03:55.301 00:05:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1117 -- # rpc_trace_cmd_test 00:03:55.301 00:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:55.301 00:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:55.301 00:05:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:55.301 00:05:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:55.301 00:05:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:55.301 00:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:55.301 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1323572", 00:03:55.301 "tpoint_group_mask": "0x8", 00:03:55.301 "iscsi_conn": { 00:03:55.301 "mask": "0x2", 00:03:55.301 "tpoint_mask": "0x0" 00:03:55.301 }, 00:03:55.301 "scsi": { 00:03:55.301 "mask": "0x4", 00:03:55.301 "tpoint_mask": "0x0" 00:03:55.301 }, 00:03:55.301 "bdev": { 00:03:55.301 "mask": "0x8", 00:03:55.301 "tpoint_mask": "0xffffffffffffffff" 00:03:55.301 }, 00:03:55.301 "nvmf_rdma": { 00:03:55.301 "mask": "0x10", 00:03:55.301 "tpoint_mask": "0x0" 00:03:55.301 }, 00:03:55.301 "nvmf_tcp": { 00:03:55.301 "mask": "0x20", 00:03:55.301 "tpoint_mask": "0x0" 00:03:55.301 }, 00:03:55.301 "ftl": { 00:03:55.301 "mask": "0x40", 00:03:55.301 "tpoint_mask": "0x0" 00:03:55.301 }, 00:03:55.301 "blobfs": { 00:03:55.301 "mask": "0x80", 00:03:55.301 "tpoint_mask": "0x0" 00:03:55.301 }, 00:03:55.301 "dsa": { 00:03:55.301 "mask": "0x200", 00:03:55.301 "tpoint_mask": "0x0" 00:03:55.301 }, 00:03:55.301 "thread": { 00:03:55.301 "mask": "0x400", 00:03:55.301 "tpoint_mask": "0x0" 00:03:55.301 }, 00:03:55.301 "nvme_pcie": { 00:03:55.301 "mask": "0x800", 00:03:55.302 "tpoint_mask": "0x0" 00:03:55.302 }, 00:03:55.302 "iaa": { 00:03:55.302 "mask": "0x1000", 00:03:55.302 "tpoint_mask": "0x0" 00:03:55.302 }, 00:03:55.302 "nvme_tcp": { 00:03:55.302 "mask": "0x2000", 00:03:55.302 "tpoint_mask": "0x0" 00:03:55.302 }, 00:03:55.302 "bdev_nvme": { 00:03:55.302 "mask": "0x4000", 00:03:55.302 "tpoint_mask": "0x0" 00:03:55.302 }, 00:03:55.302 "sock": { 00:03:55.302 "mask": "0x8000", 00:03:55.302 "tpoint_mask": "0x0" 00:03:55.302 } 00:03:55.302 }' 00:03:55.302 00:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:55.561 00:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:55.561 00:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:55.561 00:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:55.561 00:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:55.561 00:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:55.561 00:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:55.561 00:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:55.561 00:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:55.561 00:05:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:55.561 00:03:55.561 real 
0m0.219s 00:03:55.561 user 0m0.191s 00:03:55.561 sys 0m0.021s 00:03:55.561 00:05:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:55.561 00:05:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:55.561 ************************************ 00:03:55.561 END TEST rpc_trace_cmd_test 00:03:55.561 ************************************ 00:03:55.561 00:05:14 rpc -- common/autotest_common.sh@1136 -- # return 0 00:03:55.561 00:05:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:55.561 00:05:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:55.561 00:05:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:55.561 00:05:14 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:55.561 00:05:14 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:55.561 00:05:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.561 ************************************ 00:03:55.561 START TEST rpc_daemon_integrity 00:03:55.561 ************************************ 00:03:55.561 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1117 -- # rpc_integrity 00:03:55.561 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:55.561 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:55.561 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.561 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:55.561 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:55.561 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:55.821 { 00:03:55.821 "name": "Malloc2", 00:03:55.821 "aliases": [ 00:03:55.821 "ea68d9ca-a842-4f40-93e5-17e95572f39e" 00:03:55.821 ], 00:03:55.821 "product_name": "Malloc disk", 00:03:55.821 "block_size": 512, 00:03:55.821 "num_blocks": 16384, 00:03:55.821 "uuid": "ea68d9ca-a842-4f40-93e5-17e95572f39e", 00:03:55.821 "assigned_rate_limits": { 00:03:55.821 "rw_ios_per_sec": 0, 00:03:55.821 "rw_mbytes_per_sec": 0, 00:03:55.821 "r_mbytes_per_sec": 0, 00:03:55.821 "w_mbytes_per_sec": 0 00:03:55.821 }, 00:03:55.821 "claimed": false, 00:03:55.821 "zoned": false, 00:03:55.821 "supported_io_types": { 00:03:55.821 "read": true, 00:03:55.821 "write": true, 00:03:55.821 "unmap": true, 00:03:55.821 "flush": true, 00:03:55.821 "reset": true, 00:03:55.821 "nvme_admin": false, 00:03:55.821 "nvme_io": false, 00:03:55.821 "nvme_io_md": 
false, 00:03:55.821 "write_zeroes": true, 00:03:55.821 "zcopy": true, 00:03:55.821 "get_zone_info": false, 00:03:55.821 "zone_management": false, 00:03:55.821 "zone_append": false, 00:03:55.821 "compare": false, 00:03:55.821 "compare_and_write": false, 00:03:55.821 "abort": true, 00:03:55.821 "seek_hole": false, 00:03:55.821 "seek_data": false, 00:03:55.821 "copy": true, 00:03:55.821 "nvme_iov_md": false 00:03:55.821 }, 00:03:55.821 "memory_domains": [ 00:03:55.821 { 00:03:55.821 "dma_device_id": "system", 00:03:55.821 "dma_device_type": 1 00:03:55.821 }, 00:03:55.821 { 00:03:55.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.821 "dma_device_type": 2 00:03:55.821 } 00:03:55.821 ], 00:03:55.821 "driver_specific": {} 00:03:55.821 } 00:03:55.821 ]' 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.821 [2024-07-16 00:05:14.519826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:55.821 [2024-07-16 00:05:14.519853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:55.821 [2024-07-16 00:05:14.519866] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1357ac0 00:03:55.821 [2024-07-16 00:05:14.519872] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:55.821 [2024-07-16 00:05:14.520828] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:55.821 [2024-07-16 00:05:14.520850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:55.821 Passthru0 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:55.821 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:55.821 { 00:03:55.821 "name": "Malloc2", 00:03:55.821 "aliases": [ 00:03:55.821 "ea68d9ca-a842-4f40-93e5-17e95572f39e" 00:03:55.821 ], 00:03:55.821 "product_name": "Malloc disk", 00:03:55.821 "block_size": 512, 00:03:55.821 "num_blocks": 16384, 00:03:55.821 "uuid": "ea68d9ca-a842-4f40-93e5-17e95572f39e", 00:03:55.821 "assigned_rate_limits": { 00:03:55.821 "rw_ios_per_sec": 0, 00:03:55.821 "rw_mbytes_per_sec": 0, 00:03:55.821 "r_mbytes_per_sec": 0, 00:03:55.821 "w_mbytes_per_sec": 0 00:03:55.821 }, 00:03:55.821 "claimed": true, 00:03:55.821 "claim_type": "exclusive_write", 00:03:55.821 "zoned": false, 00:03:55.821 "supported_io_types": { 00:03:55.821 "read": true, 00:03:55.821 "write": true, 00:03:55.821 "unmap": true, 00:03:55.821 "flush": true, 00:03:55.821 "reset": true, 00:03:55.821 "nvme_admin": false, 00:03:55.821 "nvme_io": false, 00:03:55.821 "nvme_io_md": false, 00:03:55.821 "write_zeroes": true, 00:03:55.821 "zcopy": true, 00:03:55.821 "get_zone_info": false, 00:03:55.821 
"zone_management": false, 00:03:55.821 "zone_append": false, 00:03:55.821 "compare": false, 00:03:55.821 "compare_and_write": false, 00:03:55.821 "abort": true, 00:03:55.821 "seek_hole": false, 00:03:55.821 "seek_data": false, 00:03:55.821 "copy": true, 00:03:55.821 "nvme_iov_md": false 00:03:55.821 }, 00:03:55.821 "memory_domains": [ 00:03:55.821 { 00:03:55.821 "dma_device_id": "system", 00:03:55.821 "dma_device_type": 1 00:03:55.821 }, 00:03:55.821 { 00:03:55.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.821 "dma_device_type": 2 00:03:55.821 } 00:03:55.821 ], 00:03:55.821 "driver_specific": {} 00:03:55.821 }, 00:03:55.821 { 00:03:55.821 "name": "Passthru0", 00:03:55.821 "aliases": [ 00:03:55.821 "61e584a0-430e-513e-a653-bf11223e8589" 00:03:55.821 ], 00:03:55.822 "product_name": "passthru", 00:03:55.822 "block_size": 512, 00:03:55.822 "num_blocks": 16384, 00:03:55.822 "uuid": "61e584a0-430e-513e-a653-bf11223e8589", 00:03:55.822 "assigned_rate_limits": { 00:03:55.822 "rw_ios_per_sec": 0, 00:03:55.822 "rw_mbytes_per_sec": 0, 00:03:55.822 "r_mbytes_per_sec": 0, 00:03:55.822 "w_mbytes_per_sec": 0 00:03:55.822 }, 00:03:55.822 "claimed": false, 00:03:55.822 "zoned": false, 00:03:55.822 "supported_io_types": { 00:03:55.822 "read": true, 00:03:55.822 "write": true, 00:03:55.822 "unmap": true, 00:03:55.822 "flush": true, 00:03:55.822 "reset": true, 00:03:55.822 "nvme_admin": false, 00:03:55.822 "nvme_io": false, 00:03:55.822 "nvme_io_md": false, 00:03:55.822 "write_zeroes": true, 00:03:55.822 "zcopy": true, 00:03:55.822 "get_zone_info": false, 00:03:55.822 "zone_management": false, 00:03:55.822 "zone_append": false, 00:03:55.822 "compare": false, 00:03:55.822 "compare_and_write": false, 00:03:55.822 "abort": true, 00:03:55.822 "seek_hole": false, 00:03:55.822 "seek_data": false, 00:03:55.822 "copy": true, 00:03:55.822 "nvme_iov_md": false 00:03:55.822 }, 00:03:55.822 "memory_domains": [ 00:03:55.822 { 00:03:55.822 "dma_device_id": "system", 00:03:55.822 "dma_device_type": 1 00:03:55.822 }, 00:03:55.822 { 00:03:55.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.822 "dma_device_type": 2 00:03:55.822 } 00:03:55.822 ], 00:03:55.822 "driver_specific": { 00:03:55.822 "passthru": { 00:03:55.822 "name": "Passthru0", 00:03:55.822 "base_bdev_name": "Malloc2" 00:03:55.822 } 00:03:55.822 } 00:03:55.822 } 00:03:55.822 ]' 00:03:55.822 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:55.822 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:55.822 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:55.822 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:55.822 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.822 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:55.822 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:55.822 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:55.822 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.822 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:55.822 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:55.822 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:55.822 00:05:14 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:55.822 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:55.822 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:55.822 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:55.822 00:05:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:55.822 00:03:55.822 real 0m0.282s 00:03:55.822 user 0m0.177s 00:03:55.822 sys 0m0.037s 00:03:55.822 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:55.822 00:05:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.822 ************************************ 00:03:55.822 END TEST rpc_daemon_integrity 00:03:55.822 ************************************ 00:03:56.081 00:05:14 rpc -- common/autotest_common.sh@1136 -- # return 0 00:03:56.081 00:05:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:56.081 00:05:14 rpc -- rpc/rpc.sh@84 -- # killprocess 1323572 00:03:56.081 00:05:14 rpc -- common/autotest_common.sh@942 -- # '[' -z 1323572 ']' 00:03:56.081 00:05:14 rpc -- common/autotest_common.sh@946 -- # kill -0 1323572 00:03:56.081 00:05:14 rpc -- common/autotest_common.sh@947 -- # uname 00:03:56.081 00:05:14 rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:03:56.081 00:05:14 rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1323572 00:03:56.081 00:05:14 rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:03:56.081 00:05:14 rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:03:56.081 00:05:14 rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1323572' 00:03:56.081 killing process with pid 1323572 00:03:56.081 00:05:14 rpc -- common/autotest_common.sh@961 -- # kill 1323572 00:03:56.081 00:05:14 rpc -- common/autotest_common.sh@966 -- # wait 1323572 00:03:56.341 00:03:56.341 real 0m2.421s 00:03:56.341 user 0m3.132s 00:03:56.341 sys 0m0.638s 00:03:56.341 00:05:15 rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:56.341 00:05:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.341 ************************************ 00:03:56.341 END TEST rpc 00:03:56.341 ************************************ 00:03:56.341 00:05:15 -- common/autotest_common.sh@1136 -- # return 0 00:03:56.341 00:05:15 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:56.341 00:05:15 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:56.341 00:05:15 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:56.341 00:05:15 -- common/autotest_common.sh@10 -- # set +x 00:03:56.341 ************************************ 00:03:56.341 START TEST skip_rpc 00:03:56.341 ************************************ 00:03:56.341 00:05:15 skip_rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:56.601 * Looking for test storage... 
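The rpc suite that just completed above exercises the full bdev lifecycle over JSON-RPC: create a malloc bdev, layer a passthru bdev on top, confirm both show up in bdev_get_bdevs, then tear them down in reverse order and confirm the list is empty. A minimal sketch of that cycle, assuming a running spdk_tgt and the stock scripts/rpc.py client (bdev names mirror the log; jq is used the same way rpc.sh uses it):

./scripts/rpc.py bdev_malloc_create 8 512                 # 8 MiB, 512 B blocks -> prints the new name, e.g. Malloc0
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
./scripts/rpc.py bdev_get_bdevs | jq length               # expect 2: base + passthru
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc0
./scripts/rpc.py bdev_get_bdevs | jq length               # expect 0 again
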
00:03:56.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:56.601 00:05:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:56.601 00:05:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:56.601 00:05:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:56.601 00:05:15 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:56.601 00:05:15 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:56.601 00:05:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.601 ************************************ 00:03:56.601 START TEST skip_rpc 00:03:56.601 ************************************ 00:03:56.601 00:05:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1117 -- # test_skip_rpc 00:03:56.601 00:05:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:56.601 00:05:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1324203 00:03:56.601 00:05:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:56.601 00:05:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:56.601 [2024-07-16 00:05:15.268367] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:03:56.601 [2024-07-16 00:05:15.268405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1324203 ] 00:03:56.601 [2024-07-16 00:05:15.320597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.601 [2024-07-16 00:05:15.392417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # local es=0 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@645 -- # rpc_cmd spdk_get_version 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@645 -- # es=1 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:01.876 
00:05:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1324203 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@942 -- # '[' -z 1324203 ']' 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # kill -0 1324203 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # uname 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1324203 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1324203' 00:04:01.876 killing process with pid 1324203 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@961 -- # kill 1324203 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # wait 1324203 00:04:01.876 00:04:01.876 real 0m5.364s 00:04:01.876 user 0m5.139s 00:04:01.876 sys 0m0.248s 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:01.876 00:05:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.876 ************************************ 00:04:01.876 END TEST skip_rpc 00:04:01.876 ************************************ 00:04:01.876 00:05:20 skip_rpc -- common/autotest_common.sh@1136 -- # return 0 00:04:01.876 00:05:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:01.876 00:05:20 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:01.877 00:05:20 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:01.877 00:05:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.877 ************************************ 00:04:01.877 START TEST skip_rpc_with_json 00:04:01.877 ************************************ 00:04:01.877 00:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1117 -- # test_skip_rpc_with_json 00:04:01.877 00:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:01.877 00:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:01.877 00:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1325148 00:04:01.877 00:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:01.877 00:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1325148 00:04:01.877 00:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@823 -- # '[' -z 1325148 ']' 00:04:01.877 00:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.877 00:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:01.877 00:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
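The skip_rpc case above is a pure negative test: the target is started with --no-rpc-server, so the later spdk_get_version RPC must fail, and the suite's NOT wrapper converts that failure into a pass. A hedged stand-alone sketch of the same check ($SPDK_BIN is an assumed path to the build output; the real test drives this through the suite's helpers rather than raw kill/wait):

"$SPDK_BIN/spdk_tgt" --no-rpc-server -m 0x1 &
tgt=$!
sleep 5                                   # the test likewise just sleeps 5s before probing
if ./scripts/rpc.py spdk_get_version; then
    echo "RPC answered although no server was started" >&2
    exit 1
fi
kill "$tgt"; wait "$tgt" || true
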
00:04:01.877 00:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:01.877 00:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:01.877 [2024-07-16 00:05:20.686028] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:04:01.877 [2024-07-16 00:05:20.686066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1325148 ] 00:04:02.136 [2024-07-16 00:05:20.738736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.136 [2024-07-16 00:05:20.817537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.706 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:02.706 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # return 0 00:04:02.706 00:05:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:02.706 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:02.706 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:02.706 [2024-07-16 00:05:21.506487] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:02.706 request: 00:04:02.706 { 00:04:02.706 "trtype": "tcp", 00:04:02.706 "method": "nvmf_get_transports", 00:04:02.706 "req_id": 1 00:04:02.706 } 00:04:02.706 Got JSON-RPC error response 00:04:02.706 response: 00:04:02.706 { 00:04:02.706 "code": -19, 00:04:02.706 "message": "No such device" 00:04:02.706 } 00:04:02.706 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:04:02.706 00:05:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:02.706 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:02.706 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:02.706 [2024-07-16 00:05:21.514581] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:02.706 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:02.706 00:05:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:02.706 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:02.706 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:02.966 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:02.966 00:05:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:02.966 { 00:04:02.966 "subsystems": [ 00:04:02.966 { 00:04:02.966 "subsystem": "vfio_user_target", 00:04:02.966 "config": null 00:04:02.966 }, 00:04:02.966 { 00:04:02.966 "subsystem": "keyring", 00:04:02.966 "config": [] 00:04:02.966 }, 00:04:02.966 { 00:04:02.966 "subsystem": "iobuf", 00:04:02.966 "config": [ 00:04:02.966 { 00:04:02.966 "method": "iobuf_set_options", 00:04:02.966 "params": { 00:04:02.966 "small_pool_count": 8192, 00:04:02.966 "large_pool_count": 1024, 00:04:02.966 "small_bufsize": 8192, 00:04:02.966 "large_bufsize": 135168 00:04:02.966 } 00:04:02.966 } 00:04:02.966 ] 00:04:02.966 
}, 00:04:02.966 { 00:04:02.966 "subsystem": "sock", 00:04:02.966 "config": [ 00:04:02.966 { 00:04:02.966 "method": "sock_set_default_impl", 00:04:02.966 "params": { 00:04:02.966 "impl_name": "posix" 00:04:02.966 } 00:04:02.966 }, 00:04:02.966 { 00:04:02.966 "method": "sock_impl_set_options", 00:04:02.966 "params": { 00:04:02.966 "impl_name": "ssl", 00:04:02.966 "recv_buf_size": 4096, 00:04:02.966 "send_buf_size": 4096, 00:04:02.966 "enable_recv_pipe": true, 00:04:02.966 "enable_quickack": false, 00:04:02.966 "enable_placement_id": 0, 00:04:02.966 "enable_zerocopy_send_server": true, 00:04:02.966 "enable_zerocopy_send_client": false, 00:04:02.966 "zerocopy_threshold": 0, 00:04:02.966 "tls_version": 0, 00:04:02.966 "enable_ktls": false 00:04:02.966 } 00:04:02.966 }, 00:04:02.966 { 00:04:02.966 "method": "sock_impl_set_options", 00:04:02.966 "params": { 00:04:02.966 "impl_name": "posix", 00:04:02.966 "recv_buf_size": 2097152, 00:04:02.966 "send_buf_size": 2097152, 00:04:02.966 "enable_recv_pipe": true, 00:04:02.966 "enable_quickack": false, 00:04:02.966 "enable_placement_id": 0, 00:04:02.966 "enable_zerocopy_send_server": true, 00:04:02.966 "enable_zerocopy_send_client": false, 00:04:02.966 "zerocopy_threshold": 0, 00:04:02.966 "tls_version": 0, 00:04:02.966 "enable_ktls": false 00:04:02.966 } 00:04:02.966 } 00:04:02.966 ] 00:04:02.966 }, 00:04:02.966 { 00:04:02.966 "subsystem": "vmd", 00:04:02.966 "config": [] 00:04:02.966 }, 00:04:02.966 { 00:04:02.966 "subsystem": "accel", 00:04:02.966 "config": [ 00:04:02.966 { 00:04:02.966 "method": "accel_set_options", 00:04:02.966 "params": { 00:04:02.966 "small_cache_size": 128, 00:04:02.966 "large_cache_size": 16, 00:04:02.966 "task_count": 2048, 00:04:02.966 "sequence_count": 2048, 00:04:02.966 "buf_count": 2048 00:04:02.966 } 00:04:02.966 } 00:04:02.966 ] 00:04:02.966 }, 00:04:02.966 { 00:04:02.966 "subsystem": "bdev", 00:04:02.966 "config": [ 00:04:02.966 { 00:04:02.966 "method": "bdev_set_options", 00:04:02.966 "params": { 00:04:02.966 "bdev_io_pool_size": 65535, 00:04:02.966 "bdev_io_cache_size": 256, 00:04:02.966 "bdev_auto_examine": true, 00:04:02.966 "iobuf_small_cache_size": 128, 00:04:02.966 "iobuf_large_cache_size": 16 00:04:02.966 } 00:04:02.966 }, 00:04:02.966 { 00:04:02.966 "method": "bdev_raid_set_options", 00:04:02.967 "params": { 00:04:02.967 "process_window_size_kb": 1024 00:04:02.967 } 00:04:02.967 }, 00:04:02.967 { 00:04:02.967 "method": "bdev_iscsi_set_options", 00:04:02.967 "params": { 00:04:02.967 "timeout_sec": 30 00:04:02.967 } 00:04:02.967 }, 00:04:02.967 { 00:04:02.967 "method": "bdev_nvme_set_options", 00:04:02.967 "params": { 00:04:02.967 "action_on_timeout": "none", 00:04:02.967 "timeout_us": 0, 00:04:02.967 "timeout_admin_us": 0, 00:04:02.967 "keep_alive_timeout_ms": 10000, 00:04:02.967 "arbitration_burst": 0, 00:04:02.967 "low_priority_weight": 0, 00:04:02.967 "medium_priority_weight": 0, 00:04:02.967 "high_priority_weight": 0, 00:04:02.967 "nvme_adminq_poll_period_us": 10000, 00:04:02.967 "nvme_ioq_poll_period_us": 0, 00:04:02.967 "io_queue_requests": 0, 00:04:02.967 "delay_cmd_submit": true, 00:04:02.967 "transport_retry_count": 4, 00:04:02.967 "bdev_retry_count": 3, 00:04:02.967 "transport_ack_timeout": 0, 00:04:02.967 "ctrlr_loss_timeout_sec": 0, 00:04:02.967 "reconnect_delay_sec": 0, 00:04:02.967 "fast_io_fail_timeout_sec": 0, 00:04:02.967 "disable_auto_failback": false, 00:04:02.967 "generate_uuids": false, 00:04:02.967 "transport_tos": 0, 00:04:02.967 "nvme_error_stat": false, 00:04:02.967 "rdma_srq_size": 0, 
00:04:02.967 "io_path_stat": false, 00:04:02.967 "allow_accel_sequence": false, 00:04:02.967 "rdma_max_cq_size": 0, 00:04:02.967 "rdma_cm_event_timeout_ms": 0, 00:04:02.967 "dhchap_digests": [ 00:04:02.967 "sha256", 00:04:02.967 "sha384", 00:04:02.967 "sha512" 00:04:02.967 ], 00:04:02.967 "dhchap_dhgroups": [ 00:04:02.967 "null", 00:04:02.967 "ffdhe2048", 00:04:02.967 "ffdhe3072", 00:04:02.967 "ffdhe4096", 00:04:02.967 "ffdhe6144", 00:04:02.967 "ffdhe8192" 00:04:02.967 ] 00:04:02.967 } 00:04:02.967 }, 00:04:02.967 { 00:04:02.967 "method": "bdev_nvme_set_hotplug", 00:04:02.967 "params": { 00:04:02.967 "period_us": 100000, 00:04:02.967 "enable": false 00:04:02.967 } 00:04:02.967 }, 00:04:02.967 { 00:04:02.967 "method": "bdev_wait_for_examine" 00:04:02.967 } 00:04:02.967 ] 00:04:02.967 }, 00:04:02.967 { 00:04:02.967 "subsystem": "scsi", 00:04:02.967 "config": null 00:04:02.967 }, 00:04:02.967 { 00:04:02.967 "subsystem": "scheduler", 00:04:02.967 "config": [ 00:04:02.967 { 00:04:02.967 "method": "framework_set_scheduler", 00:04:02.967 "params": { 00:04:02.967 "name": "static" 00:04:02.967 } 00:04:02.967 } 00:04:02.967 ] 00:04:02.967 }, 00:04:02.967 { 00:04:02.967 "subsystem": "vhost_scsi", 00:04:02.967 "config": [] 00:04:02.967 }, 00:04:02.967 { 00:04:02.967 "subsystem": "vhost_blk", 00:04:02.967 "config": [] 00:04:02.967 }, 00:04:02.967 { 00:04:02.967 "subsystem": "ublk", 00:04:02.967 "config": [] 00:04:02.967 }, 00:04:02.967 { 00:04:02.967 "subsystem": "nbd", 00:04:02.967 "config": [] 00:04:02.967 }, 00:04:02.967 { 00:04:02.967 "subsystem": "nvmf", 00:04:02.967 "config": [ 00:04:02.967 { 00:04:02.967 "method": "nvmf_set_config", 00:04:02.967 "params": { 00:04:02.967 "discovery_filter": "match_any", 00:04:02.967 "admin_cmd_passthru": { 00:04:02.967 "identify_ctrlr": false 00:04:02.967 } 00:04:02.967 } 00:04:02.967 }, 00:04:02.967 { 00:04:02.967 "method": "nvmf_set_max_subsystems", 00:04:02.967 "params": { 00:04:02.967 "max_subsystems": 1024 00:04:02.967 } 00:04:02.967 }, 00:04:02.967 { 00:04:02.967 "method": "nvmf_set_crdt", 00:04:02.967 "params": { 00:04:02.967 "crdt1": 0, 00:04:02.967 "crdt2": 0, 00:04:02.967 "crdt3": 0 00:04:02.967 } 00:04:02.967 }, 00:04:02.967 { 00:04:02.967 "method": "nvmf_create_transport", 00:04:02.967 "params": { 00:04:02.967 "trtype": "TCP", 00:04:02.967 "max_queue_depth": 128, 00:04:02.967 "max_io_qpairs_per_ctrlr": 127, 00:04:02.967 "in_capsule_data_size": 4096, 00:04:02.967 "max_io_size": 131072, 00:04:02.967 "io_unit_size": 131072, 00:04:02.967 "max_aq_depth": 128, 00:04:02.967 "num_shared_buffers": 511, 00:04:02.967 "buf_cache_size": 4294967295, 00:04:02.967 "dif_insert_or_strip": false, 00:04:02.967 "zcopy": false, 00:04:02.967 "c2h_success": true, 00:04:02.967 "sock_priority": 0, 00:04:02.967 "abort_timeout_sec": 1, 00:04:02.967 "ack_timeout": 0, 00:04:02.967 "data_wr_pool_size": 0 00:04:02.967 } 00:04:02.967 } 00:04:02.967 ] 00:04:02.967 }, 00:04:02.967 { 00:04:02.967 "subsystem": "iscsi", 00:04:02.967 "config": [ 00:04:02.967 { 00:04:02.967 "method": "iscsi_set_options", 00:04:02.967 "params": { 00:04:02.967 "node_base": "iqn.2016-06.io.spdk", 00:04:02.967 "max_sessions": 128, 00:04:02.967 "max_connections_per_session": 2, 00:04:02.967 "max_queue_depth": 64, 00:04:02.967 "default_time2wait": 2, 00:04:02.967 "default_time2retain": 20, 00:04:02.967 "first_burst_length": 8192, 00:04:02.967 "immediate_data": true, 00:04:02.967 "allow_duplicated_isid": false, 00:04:02.967 "error_recovery_level": 0, 00:04:02.967 "nop_timeout": 60, 00:04:02.967 "nop_in_interval": 
30, 00:04:02.967 "disable_chap": false, 00:04:02.967 "require_chap": false, 00:04:02.967 "mutual_chap": false, 00:04:02.967 "chap_group": 0, 00:04:02.967 "max_large_datain_per_connection": 64, 00:04:02.967 "max_r2t_per_connection": 4, 00:04:02.967 "pdu_pool_size": 36864, 00:04:02.967 "immediate_data_pool_size": 16384, 00:04:02.967 "data_out_pool_size": 2048 00:04:02.967 } 00:04:02.967 } 00:04:02.967 ] 00:04:02.967 } 00:04:02.967 ] 00:04:02.967 } 00:04:02.967 00:05:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:02.967 00:05:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1325148 00:04:02.967 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@942 -- # '[' -z 1325148 ']' 00:04:02.967 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # kill -0 1325148 00:04:02.967 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # uname 00:04:02.967 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:02.967 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1325148 00:04:02.967 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:02.967 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:02.967 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1325148' 00:04:02.967 killing process with pid 1325148 00:04:02.967 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@961 -- # kill 1325148 00:04:02.967 00:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # wait 1325148 00:04:03.227 00:05:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1325396 00:04:03.227 00:05:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:03.227 00:05:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:08.506 00:05:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1325396 00:04:08.506 00:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@942 -- # '[' -z 1325396 ']' 00:04:08.506 00:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # kill -0 1325396 00:04:08.506 00:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # uname 00:04:08.506 00:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:08.506 00:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1325396 00:04:08.506 00:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:08.506 00:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:08.506 00:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1325396' 00:04:08.506 killing process with pid 1325396 00:04:08.506 00:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@961 -- # kill 1325396 00:04:08.506 00:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # wait 1325396 00:04:08.506 00:05:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport 
Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:08.506 00:05:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:08.506 00:04:08.506 real 0m6.695s 00:04:08.506 user 0m6.539s 00:04:08.506 sys 0m0.543s 00:04:08.506 00:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:08.506 00:05:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:08.506 ************************************ 00:04:08.506 END TEST skip_rpc_with_json 00:04:08.506 ************************************ 00:04:08.765 00:05:27 skip_rpc -- common/autotest_common.sh@1136 -- # return 0 00:04:08.765 00:05:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:08.765 00:05:27 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:08.765 00:05:27 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:08.765 00:05:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.765 ************************************ 00:04:08.765 START TEST skip_rpc_with_delay 00:04:08.765 ************************************ 00:04:08.765 00:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1117 -- # test_skip_rpc_with_delay 00:04:08.765 00:05:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:08.765 00:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # local es=0 00:04:08.765 00:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:08.765 00:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.765 00:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:08.765 00:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.765 00:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:08.765 00:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.765 00:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:08.765 00:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.765 00:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:08.765 00:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:08.765 [2024-07-16 00:05:27.463370] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
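skip_rpc_with_json, which finishes above, proves a saved configuration actually replays: it creates the TCP transport over RPC, dumps the running config with save_config, restarts the target from that JSON, and greps the new log for the transport-init notice. A condensed sketch under the same assumptions ($SPDK_BIN, $CONFIG and $LOG are illustrative paths, not the suite's exact variables):

./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py save_config > "$CONFIG"
"$SPDK_BIN/spdk_tgt" --no-rpc-server -m 0x1 --json "$CONFIG" > "$LOG" 2>&1 &
sleep 5; kill %1; wait %1 || true
grep -q 'TCP Transport Init' "$LOG"       # present only if the JSON config replayed
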
00:04:08.765 [2024-07-16 00:05:27.463442] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:08.766 00:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # es=1 00:04:08.766 00:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:04:08.766 00:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:04:08.766 00:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:04:08.766 00:04:08.766 real 0m0.063s 00:04:08.766 user 0m0.042s 00:04:08.766 sys 0m0.020s 00:04:08.766 00:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:08.766 00:05:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:08.766 ************************************ 00:04:08.766 END TEST skip_rpc_with_delay 00:04:08.766 ************************************ 00:04:08.766 00:05:27 skip_rpc -- common/autotest_common.sh@1136 -- # return 0 00:04:08.766 00:05:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:08.766 00:05:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:08.766 00:05:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:08.766 00:05:27 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:08.766 00:05:27 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:08.766 00:05:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.766 ************************************ 00:04:08.766 START TEST exit_on_failed_rpc_init 00:04:08.766 ************************************ 00:04:08.766 00:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1117 -- # test_exit_on_failed_rpc_init 00:04:08.766 00:05:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1326360 00:04:08.766 00:05:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1326360 00:04:08.766 00:05:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:08.766 00:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@823 -- # '[' -z 1326360 ']' 00:04:08.766 00:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.766 00:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:08.766 00:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.766 00:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:08.766 00:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:08.766 [2024-07-16 00:05:27.586966] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:04:08.766 [2024-07-16 00:05:27.587005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326360 ] 00:04:09.024 [2024-07-16 00:05:27.640503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.024 [2024-07-16 00:05:27.719798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.591 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:09.591 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # return 0 00:04:09.591 00:05:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:09.591 00:05:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:09.591 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # local es=0 00:04:09.591 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:09.591 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.591 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:09.591 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.591 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:09.591 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.591 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:09.591 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.591 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:09.591 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:09.591 [2024-07-16 00:05:28.436078] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:04:09.591 [2024-07-16 00:05:28.436124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326595 ] 00:04:09.851 [2024-07-16 00:05:28.487693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.851 [2024-07-16 00:05:28.559969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:09.851 [2024-07-16 00:05:28.560035] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
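exit_on_failed_rpc_init forces the failure seen in the rpc.c ERROR above by racing two targets onto the same default RPC socket: the second spdk_tgt cannot bind /var/tmp/spdk.sock and must exit non-zero instead of hanging. A sketch under the same assumptions (the sleep stands in for the suite's waitforlisten readiness poll):

"$SPDK_BIN/spdk_tgt" -m 0x1 & first=$!
sleep 5
if "$SPDK_BIN/spdk_tgt" -m 0x2; then      # must fail: the RPC listen path is already in use
    echo "second target unexpectedly initialized" >&2
    exit 1
fi
kill "$first"; wait "$first" || true
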
00:04:09.851 [2024-07-16 00:05:28.560044] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:09.851 [2024-07-16 00:05:28.560051] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:09.851 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # es=234 00:04:09.851 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:04:09.851 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # es=106 00:04:09.851 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # case "$es" in 00:04:09.851 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=1 00:04:09.851 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:04:09.851 00:05:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:09.851 00:05:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1326360 00:04:09.851 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@942 -- # '[' -z 1326360 ']' 00:04:09.851 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # kill -0 1326360 00:04:09.851 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # uname 00:04:09.851 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:09.851 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1326360 00:04:09.851 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:09.851 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:09.851 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1326360' 00:04:09.851 killing process with pid 1326360 00:04:09.851 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@961 -- # kill 1326360 00:04:09.851 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # wait 1326360 00:04:10.419 00:04:10.419 real 0m1.440s 00:04:10.419 user 0m1.670s 00:04:10.419 sys 0m0.368s 00:04:10.419 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:10.419 00:05:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:10.419 ************************************ 00:04:10.419 END TEST exit_on_failed_rpc_init 00:04:10.419 ************************************ 00:04:10.419 00:05:29 skip_rpc -- common/autotest_common.sh@1136 -- # return 0 00:04:10.419 00:05:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:10.419 00:04:10.419 real 0m13.897s 00:04:10.419 user 0m13.531s 00:04:10.419 sys 0m1.395s 00:04:10.419 00:05:29 skip_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:10.419 00:05:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.419 ************************************ 00:04:10.419 END TEST skip_rpc 00:04:10.419 ************************************ 00:04:10.419 00:05:29 -- common/autotest_common.sh@1136 -- # return 0 00:04:10.419 00:05:29 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:10.419 00:05:29 -- 
common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:10.419 00:05:29 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:10.419 00:05:29 -- common/autotest_common.sh@10 -- # set +x 00:04:10.419 ************************************ 00:04:10.419 START TEST rpc_client 00:04:10.419 ************************************ 00:04:10.419 00:05:29 rpc_client -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:10.419 * Looking for test storage... 00:04:10.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:10.419 00:05:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:10.419 OK 00:04:10.419 00:05:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:10.419 00:04:10.419 real 0m0.113s 00:04:10.419 user 0m0.051s 00:04:10.419 sys 0m0.070s 00:04:10.419 00:05:29 rpc_client -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:10.419 00:05:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:10.419 ************************************ 00:04:10.419 END TEST rpc_client 00:04:10.419 ************************************ 00:04:10.419 00:05:29 -- common/autotest_common.sh@1136 -- # return 0 00:04:10.419 00:05:29 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:10.419 00:05:29 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:10.419 00:05:29 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:10.419 00:05:29 -- common/autotest_common.sh@10 -- # set +x 00:04:10.419 ************************************ 00:04:10.419 START TEST json_config 00:04:10.419 ************************************ 00:04:10.419 00:05:29 json_config -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:10.679 00:05:29 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:10.679 00:05:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:10.679 00:05:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:10.679 00:05:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:10.679 00:05:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:10.679 00:05:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:10.679 00:05:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:10.679 00:05:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:10.679 00:05:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:10.679 00:05:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:10.679 00:05:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:10.679 00:05:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:10.679 00:05:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:10.679 00:05:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:10.679 00:05:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:10.679 00:05:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:10.679 00:05:29 json_config 
-- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:10.679 00:05:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:10.679 00:05:29 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:10.679 00:05:29 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:10.679 00:05:29 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:10.679 00:05:29 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:10.680 00:05:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.680 00:05:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.680 00:05:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.680 00:05:29 json_config -- paths/export.sh@5 -- # export PATH 00:04:10.680 00:05:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.680 00:05:29 json_config -- nvmf/common.sh@47 -- # : 0 00:04:10.680 00:05:29 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:10.680 00:05:29 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:10.680 00:05:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:10.680 00:05:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:10.680 00:05:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:10.680 00:05:29 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:10.680 00:05:29 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:10.680 00:05:29 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:10.680 00:05:29 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:10.680 00:05:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:10.680 00:05:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:10.680 
00:05:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:10.680 00:05:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:10.680 00:05:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:10.680 00:05:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:10.680 00:05:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:10.680 00:05:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:10.680 00:05:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:10.680 00:05:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:10.680 00:05:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:10.680 00:05:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:10.680 00:05:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:10.680 00:05:29 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:10.680 00:05:29 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:10.680 INFO: JSON configuration test init 00:04:10.680 00:05:29 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:10.680 00:05:29 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:10.680 00:05:29 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:10.680 00:05:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.680 00:05:29 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:10.680 00:05:29 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:10.680 00:05:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.680 00:05:29 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:10.680 00:05:29 json_config -- json_config/common.sh@9 -- # local app=target 00:04:10.680 00:05:29 json_config -- json_config/common.sh@10 -- # shift 00:04:10.680 00:05:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:10.680 00:05:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:10.680 00:05:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:10.680 00:05:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:10.680 00:05:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:10.680 00:05:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1326732 00:04:10.680 00:05:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:10.680 Waiting for target to run... 
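[Editor's note] The declare -A entries above are how the harness keeps per-application state: one bash associative array per attribute, keyed by role ('target', 'initiator'). A minimal sketch of the same pattern, with names mirrored from the trace (an illustrative reconstruction, not the harness source):

    declare -A app_pid=([target]='' [initiator]='')
    declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock [initiator]=/var/tmp/spdk_initiator.sock)

    app=target
    echo "RPC socket for $app: ${app_socket[$app]}"   # -> /var/tmp/spdk_tgt.sock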
00:04:10.680 00:05:29 json_config -- json_config/common.sh@25 -- # waitforlisten 1326732 /var/tmp/spdk_tgt.sock 00:04:10.680 00:05:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:10.680 00:05:29 json_config -- common/autotest_common.sh@823 -- # '[' -z 1326732 ']' 00:04:10.680 00:05:29 json_config -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:10.680 00:05:29 json_config -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:10.680 00:05:29 json_config -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:10.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:10.680 00:05:29 json_config -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:10.680 00:05:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.680 [2024-07-16 00:05:29.405774] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:04:10.680 [2024-07-16 00:05:29.405827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326732 ] 00:04:10.939 [2024-07-16 00:05:29.691093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.939 [2024-07-16 00:05:29.758593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.509 00:05:30 json_config -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:11.509 00:05:30 json_config -- common/autotest_common.sh@856 -- # return 0 00:04:11.509 00:05:30 json_config -- json_config/common.sh@26 -- # echo '' 00:04:11.509 00:04:11.509 00:05:30 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:11.509 00:05:30 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:11.509 00:05:30 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:11.509 00:05:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.509 00:05:30 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:11.509 00:05:30 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:11.509 00:05:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:11.509 00:05:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.509 00:05:30 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:11.509 00:05:30 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:11.509 00:05:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:14.800 00:05:33 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:14.800 00:05:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:14.800 
00:05:33 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:14.800 00:05:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:14.800 00:05:33 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:14.800 00:05:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:14.800 00:05:33 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:14.800 00:05:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:14.800 00:05:33 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:14.800 00:05:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:15.059 MallocForNvmf0 00:04:15.059 00:05:33 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:15.059 00:05:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:15.059 MallocForNvmf1 00:04:15.059 00:05:33 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:15.059 00:05:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:15.318 [2024-07-16 00:05:34.020929] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:15.318 00:05:34 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:15.318 00:05:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:15.609 00:05:34 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:15.609 00:05:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:15.609 00:05:34 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:15.609 00:05:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:15.868 00:05:34 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:15.868 00:05:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:15.868 [2024-07-16 00:05:34.703055] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:15.868 00:05:34 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:15.868 00:05:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:15.868 00:05:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.127 00:05:34 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:16.127 00:05:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:16.127 00:05:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.127 00:05:34 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:16.127 00:05:34 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:16.127 00:05:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:16.127 MallocBdevForConfigChangeCheck 00:04:16.127 00:05:34 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:16.127 00:05:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:16.127 00:05:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.127 00:05:34 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:16.127 00:05:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:16.695 00:05:35 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:16.695 INFO: shutting down applications... 
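[Editor's note] Condensed, the RPC calls json_config just issued over /var/tmp/spdk_tgt.sock are the standard recipe for a minimal NVMe/TCP target. The same sequence as a plain script, using the rpc.py verbs and arguments exactly as they appear in the trace (run from the spdk checkout against a live target):

    rpc='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0                   # 8 MB malloc bdev, 512 B blocks
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0                        # TCP transport
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0  # expose the bdev as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420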
00:04:16.695 00:05:35 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:16.695 00:05:35 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:16.695 00:05:35 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:16.695 00:05:35 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:18.073 Calling clear_iscsi_subsystem 00:04:18.073 Calling clear_nvmf_subsystem 00:04:18.073 Calling clear_nbd_subsystem 00:04:18.073 Calling clear_ublk_subsystem 00:04:18.073 Calling clear_vhost_blk_subsystem 00:04:18.073 Calling clear_vhost_scsi_subsystem 00:04:18.073 Calling clear_bdev_subsystem 00:04:18.073 00:05:36 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:18.073 00:05:36 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:18.073 00:05:36 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:18.073 00:05:36 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:18.073 00:05:36 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:18.073 00:05:36 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:18.333 00:05:37 json_config -- json_config/json_config.sh@345 -- # break 00:04:18.333 00:05:37 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:18.333 00:05:37 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:18.333 00:05:37 json_config -- json_config/common.sh@31 -- # local app=target 00:04:18.333 00:05:37 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:18.333 00:05:37 json_config -- json_config/common.sh@35 -- # [[ -n 1326732 ]] 00:04:18.333 00:05:37 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1326732 00:04:18.333 00:05:37 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:18.333 00:05:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:18.333 00:05:37 json_config -- json_config/common.sh@41 -- # kill -0 1326732 00:04:18.333 00:05:37 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:18.902 00:05:37 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:18.902 00:05:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:18.902 00:05:37 json_config -- json_config/common.sh@41 -- # kill -0 1326732 00:04:18.902 00:05:37 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:18.902 00:05:37 json_config -- json_config/common.sh@43 -- # break 00:04:18.902 00:05:37 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:18.902 00:05:37 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:18.902 SPDK target shutdown done 00:04:18.902 00:05:37 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:18.902 INFO: relaunching applications... 
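[Editor's note] The shutdown just logged is a simple signal-and-poll pattern: send SIGINT, then test the pid with kill -0 until it exits or a retry budget runs out. Stripped down (loop bound and sleep interval taken from the trace above):

    pid=${app_pid[target]}
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 only checks existence, sends no signal
        sleep 0.5
    done
    echo 'SPDK target shutdown done'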
00:04:18.902 00:05:37 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:18.902 00:05:37 json_config -- json_config/common.sh@9 -- # local app=target 00:04:18.902 00:05:37 json_config -- json_config/common.sh@10 -- # shift 00:04:18.902 00:05:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:18.902 00:05:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:18.902 00:05:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:18.902 00:05:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:18.902 00:05:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:18.902 00:05:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1328330 00:04:18.902 00:05:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:18.902 Waiting for target to run... 00:04:18.902 00:05:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:18.902 00:05:37 json_config -- json_config/common.sh@25 -- # waitforlisten 1328330 /var/tmp/spdk_tgt.sock 00:04:18.902 00:05:37 json_config -- common/autotest_common.sh@823 -- # '[' -z 1328330 ']' 00:04:18.902 00:05:37 json_config -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:18.902 00:05:37 json_config -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:18.902 00:05:37 json_config -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:18.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:18.902 00:05:37 json_config -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:18.902 00:05:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.902 [2024-07-16 00:05:37.732907] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:04:18.902 [2024-07-16 00:05:37.732967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1328330 ] 00:04:19.470 [2024-07-16 00:05:38.026175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.470 [2024-07-16 00:05:38.094894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.761 [2024-07-16 00:05:41.110851] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:22.761 [2024-07-16 00:05:41.143041] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:22.761 00:05:41 json_config -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:22.761 00:05:41 json_config -- common/autotest_common.sh@856 -- # return 0 00:04:22.761 00:05:41 json_config -- json_config/common.sh@26 -- # echo '' 00:04:22.761 00:04:22.761 00:05:41 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:22.761 00:05:41 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:22.761 INFO: Checking if target configuration is the same... 
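[Editor's note] The "is the configuration the same" check coming up next does not compare files byte-for-byte: both the stored spdk_tgt_config.json and a fresh save_config dump are first canonicalized with config_filter.py -method sort, and only then handed to diff -u. The essence of json_diff.sh, under the assumption (consistent with the trace) that config_filter.py filters stdin to stdout:

    live=$(mktemp); stored=$(mktemp)
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > "$live"
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$stored"
    if diff -u "$stored" "$live"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm "$live" "$stored"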
00:04:22.761 00:05:41 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.761 00:05:41 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:22.761 00:05:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:22.761 + '[' 2 -ne 2 ']' 00:04:22.761 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:22.761 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:22.761 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:22.761 +++ basename /dev/fd/62 00:04:22.761 ++ mktemp /tmp/62.XXX 00:04:22.761 + tmp_file_1=/tmp/62.jK4 00:04:22.761 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.761 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:22.761 + tmp_file_2=/tmp/spdk_tgt_config.json.Fmr 00:04:22.761 + ret=0 00:04:22.761 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:22.761 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:22.761 + diff -u /tmp/62.jK4 /tmp/spdk_tgt_config.json.Fmr 00:04:22.761 + echo 'INFO: JSON config files are the same' 00:04:22.761 INFO: JSON config files are the same 00:04:22.761 + rm /tmp/62.jK4 /tmp/spdk_tgt_config.json.Fmr 00:04:22.761 + exit 0 00:04:22.761 00:05:41 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:22.761 00:05:41 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:22.761 INFO: changing configuration and checking if this can be detected... 00:04:22.761 00:05:41 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:22.761 00:05:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:23.021 00:05:41 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:23.021 00:05:41 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:23.021 00:05:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:23.021 + '[' 2 -ne 2 ']' 00:04:23.021 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:23.021 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:23.021 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:23.021 +++ basename /dev/fd/62 00:04:23.021 ++ mktemp /tmp/62.XXX 00:04:23.021 + tmp_file_1=/tmp/62.QRI 00:04:23.021 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:23.021 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:23.021 + tmp_file_2=/tmp/spdk_tgt_config.json.tvE 00:04:23.021 + ret=0 00:04:23.021 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:23.280 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:23.280 + diff -u /tmp/62.QRI /tmp/spdk_tgt_config.json.tvE 00:04:23.280 + ret=1 00:04:23.280 + echo '=== Start of file: /tmp/62.QRI ===' 00:04:23.280 + cat /tmp/62.QRI 00:04:23.280 + echo '=== End of file: /tmp/62.QRI ===' 00:04:23.280 + echo '' 00:04:23.280 + echo '=== Start of file: /tmp/spdk_tgt_config.json.tvE ===' 00:04:23.280 + cat /tmp/spdk_tgt_config.json.tvE 00:04:23.280 + echo '=== End of file: /tmp/spdk_tgt_config.json.tvE ===' 00:04:23.280 + echo '' 00:04:23.280 + rm /tmp/62.QRI /tmp/spdk_tgt_config.json.tvE 00:04:23.280 + exit 1 00:04:23.280 00:05:42 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:23.280 INFO: configuration change detected. 00:04:23.280 00:05:42 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:23.280 00:05:42 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:23.280 00:05:42 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:23.280 00:05:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.280 00:05:42 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:23.280 00:05:42 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:23.280 00:05:42 json_config -- json_config/json_config.sh@317 -- # [[ -n 1328330 ]] 00:04:23.280 00:05:42 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:23.280 00:05:42 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:23.280 00:05:42 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:23.280 00:05:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.280 00:05:42 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:23.280 00:05:42 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:23.280 00:05:42 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:23.280 00:05:42 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:23.280 00:05:42 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:23.280 00:05:42 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:23.280 00:05:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:23.280 00:05:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.539 00:05:42 json_config -- json_config/json_config.sh@323 -- # killprocess 1328330 00:04:23.539 00:05:42 json_config -- common/autotest_common.sh@942 -- # '[' -z 1328330 ']' 00:04:23.539 00:05:42 json_config -- common/autotest_common.sh@946 -- # kill -0 1328330 00:04:23.539 00:05:42 json_config -- common/autotest_common.sh@947 -- # uname 00:04:23.539 00:05:42 json_config -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:23.539 00:05:42 
json_config -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1328330 00:04:23.539 00:05:42 json_config -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:23.539 00:05:42 json_config -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:23.539 00:05:42 json_config -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1328330' 00:04:23.539 killing process with pid 1328330 00:04:23.539 00:05:42 json_config -- common/autotest_common.sh@961 -- # kill 1328330 00:04:23.539 00:05:42 json_config -- common/autotest_common.sh@966 -- # wait 1328330 00:04:24.916 00:05:43 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:24.916 00:05:43 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:24.916 00:05:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:24.916 00:05:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.916 00:05:43 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:24.916 00:05:43 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:24.916 INFO: Success 00:04:24.916 00:04:24.916 real 0m14.459s 00:04:24.916 user 0m15.256s 00:04:24.916 sys 0m1.665s 00:04:24.916 00:05:43 json_config -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:24.916 00:05:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.916 ************************************ 00:04:24.916 END TEST json_config 00:04:24.916 ************************************ 00:04:24.916 00:05:43 -- common/autotest_common.sh@1136 -- # return 0 00:04:24.916 00:05:43 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:24.916 00:05:43 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:24.916 00:05:43 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:24.916 00:05:43 -- common/autotest_common.sh@10 -- # set +x 00:04:25.175 ************************************ 00:04:25.175 START TEST json_config_extra_key 00:04:25.175 ************************************ 00:04:25.175 00:05:43 json_config_extra_key -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:25.175 00:05:43 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:25.175 00:05:43 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:25.175 00:05:43 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:25.175 00:05:43 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:25.175 00:05:43 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:25.175 00:05:43 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:25.175 00:05:43 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:25.175 00:05:43 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:25.175 00:05:43 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:25.175 00:05:43 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:25.175 00:05:43 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:25.175 00:05:43 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:25.175 00:05:43 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:25.175 00:05:43 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:25.175 00:05:43 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:25.175 00:05:43 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:25.175 00:05:43 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:25.175 00:05:43 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:25.175 00:05:43 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:25.175 00:05:43 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:25.175 00:05:43 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:25.175 00:05:43 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:25.175 00:05:43 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.175 00:05:43 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.175 00:05:43 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.175 00:05:43 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:25.175 00:05:43 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.176 00:05:43 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:25.176 00:05:43 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:25.176 00:05:43 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:25.176 00:05:43 
json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:25.176 00:05:43 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:25.176 00:05:43 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:25.176 00:05:43 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:25.176 00:05:43 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:25.176 00:05:43 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:25.176 00:05:43 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:25.176 00:05:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:25.176 00:05:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:25.176 00:05:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:25.176 00:05:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:25.176 00:05:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:25.176 00:05:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:25.176 00:05:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:25.176 00:05:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:25.176 00:05:43 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:25.176 00:05:43 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:25.176 INFO: launching applications... 00:04:25.176 00:05:43 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:25.176 00:05:43 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:25.176 00:05:43 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:25.176 00:05:43 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:25.176 00:05:43 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:25.176 00:05:43 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:25.176 00:05:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:25.176 00:05:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:25.176 00:05:43 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1329494 00:04:25.176 00:05:43 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:25.176 Waiting for target to run... 
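[Editor's note] waitforlisten, called in the next entries, is what turns 'Waiting for target to run...' into a hard synchronization point: it blocks until the freshly forked spdk_tgt answers on its UNIX socket. A hedged approximation (the real helper lives in autotest_common.sh; this sketch just retries a cheap RPC, reusing the max_retries=100 default seen in the trace):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1   # never came up
    }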
00:04:25.176 00:05:43 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1329494 /var/tmp/spdk_tgt.sock 00:04:25.176 00:05:43 json_config_extra_key -- common/autotest_common.sh@823 -- # '[' -z 1329494 ']' 00:04:25.176 00:05:43 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:25.176 00:05:43 json_config_extra_key -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:25.176 00:05:43 json_config_extra_key -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:25.176 00:05:43 json_config_extra_key -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:25.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:25.176 00:05:43 json_config_extra_key -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:25.176 00:05:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:25.176 [2024-07-16 00:05:43.922818] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:04:25.176 [2024-07-16 00:05:43.922871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1329494 ] 00:04:25.743 [2024-07-16 00:05:44.364837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.743 [2024-07-16 00:05:44.449672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.001 00:05:44 json_config_extra_key -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:26.001 00:05:44 json_config_extra_key -- common/autotest_common.sh@856 -- # return 0 00:04:26.001 00:05:44 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:26.001 00:04:26.001 00:05:44 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:26.001 INFO: shutting down applications... 
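[Editor's note] Worth noting: the suite has now exercised both launch modes. The earlier target came up empty with --wait-for-rpc and was configured live over the socket; this one was handed a complete configuration at startup via --json. Roughly, the lifecycle that connects the two (paths abbreviated, comments are an editorial summary of the trace):

    # 1. start unconfigured, build state over RPC, persist it
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    #    ... rpc.py calls to create bdevs/subsystems ...
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    # 2. relaunch and replay the saved config during init
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &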
00:04:26.001 00:05:44 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:26.001 00:05:44 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:26.001 00:05:44 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:26.001 00:05:44 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1329494 ]] 00:04:26.001 00:05:44 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1329494 00:04:26.001 00:05:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:26.001 00:05:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:26.001 00:05:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1329494 00:04:26.001 00:05:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:26.567 00:05:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:26.567 00:05:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:26.567 00:05:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1329494 00:04:26.567 00:05:45 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:26.567 00:05:45 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:26.567 00:05:45 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:26.567 00:05:45 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:26.567 SPDK target shutdown done 00:04:26.567 00:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:26.567 Success 00:04:26.567 00:04:26.567 real 0m1.438s 00:04:26.567 user 0m1.042s 00:04:26.567 sys 0m0.532s 00:04:26.567 00:05:45 json_config_extra_key -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:26.567 00:05:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:26.567 ************************************ 00:04:26.567 END TEST json_config_extra_key 00:04:26.567 ************************************ 00:04:26.567 00:05:45 -- common/autotest_common.sh@1136 -- # return 0 00:04:26.567 00:05:45 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:26.567 00:05:45 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:26.567 00:05:45 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:26.567 00:05:45 -- common/autotest_common.sh@10 -- # set +x 00:04:26.567 ************************************ 00:04:26.567 START TEST alias_rpc 00:04:26.567 ************************************ 00:04:26.567 00:05:45 alias_rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:26.567 * Looking for test storage... 
00:04:26.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:26.567 00:05:45 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:26.567 00:05:45 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1329776 00:04:26.567 00:05:45 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1329776 00:04:26.567 00:05:45 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.567 00:05:45 alias_rpc -- common/autotest_common.sh@823 -- # '[' -z 1329776 ']' 00:04:26.567 00:05:45 alias_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.567 00:05:45 alias_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:26.567 00:05:45 alias_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.567 00:05:45 alias_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:26.567 00:05:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.567 [2024-07-16 00:05:45.417844] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:04:26.567 [2024-07-16 00:05:45.417894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1329776 ] 00:04:26.825 [2024-07-16 00:05:45.471391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.825 [2024-07-16 00:05:45.545201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.418 00:05:46 alias_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:27.418 00:05:46 alias_rpc -- common/autotest_common.sh@856 -- # return 0 00:04:27.418 00:05:46 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:27.677 00:05:46 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1329776 00:04:27.677 00:05:46 alias_rpc -- common/autotest_common.sh@942 -- # '[' -z 1329776 ']' 00:04:27.677 00:05:46 alias_rpc -- common/autotest_common.sh@946 -- # kill -0 1329776 00:04:27.677 00:05:46 alias_rpc -- common/autotest_common.sh@947 -- # uname 00:04:27.677 00:05:46 alias_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:27.677 00:05:46 alias_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1329776 00:04:27.677 00:05:46 alias_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:27.677 00:05:46 alias_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:27.677 00:05:46 alias_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1329776' 00:04:27.677 killing process with pid 1329776 00:04:27.677 00:05:46 alias_rpc -- common/autotest_common.sh@961 -- # kill 1329776 00:04:27.677 00:05:46 alias_rpc -- common/autotest_common.sh@966 -- # wait 1329776 00:04:27.934 00:04:27.934 real 0m1.475s 00:04:27.934 user 0m1.628s 00:04:27.934 sys 0m0.377s 00:04:27.934 00:05:46 alias_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:27.934 00:05:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.934 ************************************ 00:04:27.934 END TEST 
alias_rpc 00:04:27.934 ************************************ 00:04:28.193 00:05:46 -- common/autotest_common.sh@1136 -- # return 0 00:04:28.193 00:05:46 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:28.193 00:05:46 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:28.193 00:05:46 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:28.193 00:05:46 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:28.193 00:05:46 -- common/autotest_common.sh@10 -- # set +x 00:04:28.193 ************************************ 00:04:28.193 START TEST spdkcli_tcp 00:04:28.193 ************************************ 00:04:28.193 00:05:46 spdkcli_tcp -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:28.193 * Looking for test storage... 00:04:28.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:28.193 00:05:46 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:28.193 00:05:46 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:28.193 00:05:46 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:28.193 00:05:46 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:28.193 00:05:46 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:28.193 00:05:46 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:28.193 00:05:46 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:28.193 00:05:46 spdkcli_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:28.193 00:05:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:28.193 00:05:46 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1330067 00:04:28.193 00:05:46 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1330067 00:04:28.193 00:05:46 spdkcli_tcp -- common/autotest_common.sh@823 -- # '[' -z 1330067 ']' 00:04:28.193 00:05:46 spdkcli_tcp -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.193 00:05:46 spdkcli_tcp -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:28.193 00:05:46 spdkcli_tcp -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.193 00:05:46 spdkcli_tcp -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:28.193 00:05:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:28.193 00:05:46 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:28.193 [2024-07-16 00:05:46.966394] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
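[Editor's note] The spdkcli_tcp test starting here never teaches spdk_tgt to serve RPC over TCP itself; it bridges the UNIX socket with socat and points rpc.py at the TCP side, exactly as the next entries show. The shape of that setup, with the addresses and flags used below:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # relay 127.0.0.1:9998 <-> UNIX socket
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid" 2>/dev/null || true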
00:04:28.193 [2024-07-16 00:05:46.966437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1330067 ] 00:04:28.193 [2024-07-16 00:05:47.020307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:28.451 [2024-07-16 00:05:47.101703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.451 [2024-07-16 00:05:47.101706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.041 00:05:47 spdkcli_tcp -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:29.041 00:05:47 spdkcli_tcp -- common/autotest_common.sh@856 -- # return 0 00:04:29.041 00:05:47 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1330290 00:04:29.041 00:05:47 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:29.041 00:05:47 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:29.300 [ 00:04:29.300 "bdev_malloc_delete", 00:04:29.300 "bdev_malloc_create", 00:04:29.300 "bdev_null_resize", 00:04:29.300 "bdev_null_delete", 00:04:29.300 "bdev_null_create", 00:04:29.300 "bdev_nvme_cuse_unregister", 00:04:29.300 "bdev_nvme_cuse_register", 00:04:29.300 "bdev_opal_new_user", 00:04:29.300 "bdev_opal_set_lock_state", 00:04:29.300 "bdev_opal_delete", 00:04:29.300 "bdev_opal_get_info", 00:04:29.300 "bdev_opal_create", 00:04:29.300 "bdev_nvme_opal_revert", 00:04:29.300 "bdev_nvme_opal_init", 00:04:29.300 "bdev_nvme_send_cmd", 00:04:29.300 "bdev_nvme_get_path_iostat", 00:04:29.300 "bdev_nvme_get_mdns_discovery_info", 00:04:29.300 "bdev_nvme_stop_mdns_discovery", 00:04:29.300 "bdev_nvme_start_mdns_discovery", 00:04:29.300 "bdev_nvme_set_multipath_policy", 00:04:29.300 "bdev_nvme_set_preferred_path", 00:04:29.300 "bdev_nvme_get_io_paths", 00:04:29.300 "bdev_nvme_remove_error_injection", 00:04:29.300 "bdev_nvme_add_error_injection", 00:04:29.300 "bdev_nvme_get_discovery_info", 00:04:29.300 "bdev_nvme_stop_discovery", 00:04:29.300 "bdev_nvme_start_discovery", 00:04:29.300 "bdev_nvme_get_controller_health_info", 00:04:29.300 "bdev_nvme_disable_controller", 00:04:29.300 "bdev_nvme_enable_controller", 00:04:29.300 "bdev_nvme_reset_controller", 00:04:29.300 "bdev_nvme_get_transport_statistics", 00:04:29.300 "bdev_nvme_apply_firmware", 00:04:29.300 "bdev_nvme_detach_controller", 00:04:29.300 "bdev_nvme_get_controllers", 00:04:29.300 "bdev_nvme_attach_controller", 00:04:29.300 "bdev_nvme_set_hotplug", 00:04:29.300 "bdev_nvme_set_options", 00:04:29.300 "bdev_passthru_delete", 00:04:29.300 "bdev_passthru_create", 00:04:29.300 "bdev_lvol_set_parent_bdev", 00:04:29.300 "bdev_lvol_set_parent", 00:04:29.300 "bdev_lvol_check_shallow_copy", 00:04:29.300 "bdev_lvol_start_shallow_copy", 00:04:29.300 "bdev_lvol_grow_lvstore", 00:04:29.300 "bdev_lvol_get_lvols", 00:04:29.300 "bdev_lvol_get_lvstores", 00:04:29.300 "bdev_lvol_delete", 00:04:29.300 "bdev_lvol_set_read_only", 00:04:29.300 "bdev_lvol_resize", 00:04:29.300 "bdev_lvol_decouple_parent", 00:04:29.300 "bdev_lvol_inflate", 00:04:29.300 "bdev_lvol_rename", 00:04:29.300 "bdev_lvol_clone_bdev", 00:04:29.300 "bdev_lvol_clone", 00:04:29.300 "bdev_lvol_snapshot", 00:04:29.300 "bdev_lvol_create", 00:04:29.300 "bdev_lvol_delete_lvstore", 00:04:29.300 "bdev_lvol_rename_lvstore", 00:04:29.300 "bdev_lvol_create_lvstore", 
00:04:29.300 "bdev_raid_set_options", 00:04:29.300 "bdev_raid_remove_base_bdev", 00:04:29.300 "bdev_raid_add_base_bdev", 00:04:29.300 "bdev_raid_delete", 00:04:29.300 "bdev_raid_create", 00:04:29.300 "bdev_raid_get_bdevs", 00:04:29.300 "bdev_error_inject_error", 00:04:29.300 "bdev_error_delete", 00:04:29.300 "bdev_error_create", 00:04:29.300 "bdev_split_delete", 00:04:29.300 "bdev_split_create", 00:04:29.300 "bdev_delay_delete", 00:04:29.300 "bdev_delay_create", 00:04:29.300 "bdev_delay_update_latency", 00:04:29.300 "bdev_zone_block_delete", 00:04:29.300 "bdev_zone_block_create", 00:04:29.300 "blobfs_create", 00:04:29.300 "blobfs_detect", 00:04:29.300 "blobfs_set_cache_size", 00:04:29.300 "bdev_aio_delete", 00:04:29.300 "bdev_aio_rescan", 00:04:29.300 "bdev_aio_create", 00:04:29.300 "bdev_ftl_set_property", 00:04:29.300 "bdev_ftl_get_properties", 00:04:29.300 "bdev_ftl_get_stats", 00:04:29.300 "bdev_ftl_unmap", 00:04:29.300 "bdev_ftl_unload", 00:04:29.300 "bdev_ftl_delete", 00:04:29.300 "bdev_ftl_load", 00:04:29.300 "bdev_ftl_create", 00:04:29.300 "bdev_virtio_attach_controller", 00:04:29.300 "bdev_virtio_scsi_get_devices", 00:04:29.300 "bdev_virtio_detach_controller", 00:04:29.300 "bdev_virtio_blk_set_hotplug", 00:04:29.300 "bdev_iscsi_delete", 00:04:29.300 "bdev_iscsi_create", 00:04:29.300 "bdev_iscsi_set_options", 00:04:29.300 "accel_error_inject_error", 00:04:29.300 "ioat_scan_accel_module", 00:04:29.300 "dsa_scan_accel_module", 00:04:29.300 "iaa_scan_accel_module", 00:04:29.300 "vfu_virtio_create_scsi_endpoint", 00:04:29.300 "vfu_virtio_scsi_remove_target", 00:04:29.300 "vfu_virtio_scsi_add_target", 00:04:29.300 "vfu_virtio_create_blk_endpoint", 00:04:29.300 "vfu_virtio_delete_endpoint", 00:04:29.300 "keyring_file_remove_key", 00:04:29.300 "keyring_file_add_key", 00:04:29.300 "keyring_linux_set_options", 00:04:29.300 "iscsi_get_histogram", 00:04:29.300 "iscsi_enable_histogram", 00:04:29.300 "iscsi_set_options", 00:04:29.300 "iscsi_get_auth_groups", 00:04:29.300 "iscsi_auth_group_remove_secret", 00:04:29.301 "iscsi_auth_group_add_secret", 00:04:29.301 "iscsi_delete_auth_group", 00:04:29.301 "iscsi_create_auth_group", 00:04:29.301 "iscsi_set_discovery_auth", 00:04:29.301 "iscsi_get_options", 00:04:29.301 "iscsi_target_node_request_logout", 00:04:29.301 "iscsi_target_node_set_redirect", 00:04:29.301 "iscsi_target_node_set_auth", 00:04:29.301 "iscsi_target_node_add_lun", 00:04:29.301 "iscsi_get_stats", 00:04:29.301 "iscsi_get_connections", 00:04:29.301 "iscsi_portal_group_set_auth", 00:04:29.301 "iscsi_start_portal_group", 00:04:29.301 "iscsi_delete_portal_group", 00:04:29.301 "iscsi_create_portal_group", 00:04:29.301 "iscsi_get_portal_groups", 00:04:29.301 "iscsi_delete_target_node", 00:04:29.301 "iscsi_target_node_remove_pg_ig_maps", 00:04:29.301 "iscsi_target_node_add_pg_ig_maps", 00:04:29.301 "iscsi_create_target_node", 00:04:29.301 "iscsi_get_target_nodes", 00:04:29.301 "iscsi_delete_initiator_group", 00:04:29.301 "iscsi_initiator_group_remove_initiators", 00:04:29.301 "iscsi_initiator_group_add_initiators", 00:04:29.301 "iscsi_create_initiator_group", 00:04:29.301 "iscsi_get_initiator_groups", 00:04:29.301 "nvmf_set_crdt", 00:04:29.301 "nvmf_set_config", 00:04:29.301 "nvmf_set_max_subsystems", 00:04:29.301 "nvmf_stop_mdns_prr", 00:04:29.301 "nvmf_publish_mdns_prr", 00:04:29.301 "nvmf_subsystem_get_listeners", 00:04:29.301 "nvmf_subsystem_get_qpairs", 00:04:29.301 "nvmf_subsystem_get_controllers", 00:04:29.301 "nvmf_get_stats", 00:04:29.301 "nvmf_get_transports", 00:04:29.301 
"nvmf_create_transport", 00:04:29.301 "nvmf_get_targets", 00:04:29.301 "nvmf_delete_target", 00:04:29.301 "nvmf_create_target", 00:04:29.301 "nvmf_subsystem_allow_any_host", 00:04:29.301 "nvmf_subsystem_remove_host", 00:04:29.301 "nvmf_subsystem_add_host", 00:04:29.301 "nvmf_ns_remove_host", 00:04:29.301 "nvmf_ns_add_host", 00:04:29.301 "nvmf_subsystem_remove_ns", 00:04:29.301 "nvmf_subsystem_add_ns", 00:04:29.301 "nvmf_subsystem_listener_set_ana_state", 00:04:29.301 "nvmf_discovery_get_referrals", 00:04:29.301 "nvmf_discovery_remove_referral", 00:04:29.301 "nvmf_discovery_add_referral", 00:04:29.301 "nvmf_subsystem_remove_listener", 00:04:29.301 "nvmf_subsystem_add_listener", 00:04:29.301 "nvmf_delete_subsystem", 00:04:29.301 "nvmf_create_subsystem", 00:04:29.301 "nvmf_get_subsystems", 00:04:29.301 "env_dpdk_get_mem_stats", 00:04:29.301 "nbd_get_disks", 00:04:29.301 "nbd_stop_disk", 00:04:29.301 "nbd_start_disk", 00:04:29.301 "ublk_recover_disk", 00:04:29.301 "ublk_get_disks", 00:04:29.301 "ublk_stop_disk", 00:04:29.301 "ublk_start_disk", 00:04:29.301 "ublk_destroy_target", 00:04:29.301 "ublk_create_target", 00:04:29.301 "virtio_blk_create_transport", 00:04:29.301 "virtio_blk_get_transports", 00:04:29.301 "vhost_controller_set_coalescing", 00:04:29.301 "vhost_get_controllers", 00:04:29.301 "vhost_delete_controller", 00:04:29.301 "vhost_create_blk_controller", 00:04:29.301 "vhost_scsi_controller_remove_target", 00:04:29.301 "vhost_scsi_controller_add_target", 00:04:29.301 "vhost_start_scsi_controller", 00:04:29.301 "vhost_create_scsi_controller", 00:04:29.301 "thread_set_cpumask", 00:04:29.301 "framework_get_governor", 00:04:29.301 "framework_get_scheduler", 00:04:29.301 "framework_set_scheduler", 00:04:29.301 "framework_get_reactors", 00:04:29.301 "thread_get_io_channels", 00:04:29.301 "thread_get_pollers", 00:04:29.301 "thread_get_stats", 00:04:29.301 "framework_monitor_context_switch", 00:04:29.301 "spdk_kill_instance", 00:04:29.301 "log_enable_timestamps", 00:04:29.301 "log_get_flags", 00:04:29.301 "log_clear_flag", 00:04:29.301 "log_set_flag", 00:04:29.301 "log_get_level", 00:04:29.301 "log_set_level", 00:04:29.301 "log_get_print_level", 00:04:29.301 "log_set_print_level", 00:04:29.301 "framework_enable_cpumask_locks", 00:04:29.301 "framework_disable_cpumask_locks", 00:04:29.301 "framework_wait_init", 00:04:29.301 "framework_start_init", 00:04:29.301 "scsi_get_devices", 00:04:29.301 "bdev_get_histogram", 00:04:29.301 "bdev_enable_histogram", 00:04:29.301 "bdev_set_qos_limit", 00:04:29.301 "bdev_set_qd_sampling_period", 00:04:29.301 "bdev_get_bdevs", 00:04:29.301 "bdev_reset_iostat", 00:04:29.301 "bdev_get_iostat", 00:04:29.301 "bdev_examine", 00:04:29.301 "bdev_wait_for_examine", 00:04:29.301 "bdev_set_options", 00:04:29.301 "notify_get_notifications", 00:04:29.301 "notify_get_types", 00:04:29.301 "accel_get_stats", 00:04:29.301 "accel_set_options", 00:04:29.301 "accel_set_driver", 00:04:29.301 "accel_crypto_key_destroy", 00:04:29.301 "accel_crypto_keys_get", 00:04:29.301 "accel_crypto_key_create", 00:04:29.301 "accel_assign_opc", 00:04:29.301 "accel_get_module_info", 00:04:29.301 "accel_get_opc_assignments", 00:04:29.301 "vmd_rescan", 00:04:29.301 "vmd_remove_device", 00:04:29.301 "vmd_enable", 00:04:29.301 "sock_get_default_impl", 00:04:29.301 "sock_set_default_impl", 00:04:29.301 "sock_impl_set_options", 00:04:29.301 "sock_impl_get_options", 00:04:29.301 "iobuf_get_stats", 00:04:29.301 "iobuf_set_options", 00:04:29.301 "keyring_get_keys", 00:04:29.301 "framework_get_pci_devices", 
00:04:29.301 "framework_get_config", 00:04:29.301 "framework_get_subsystems", 00:04:29.301 "vfu_tgt_set_base_path", 00:04:29.301 "trace_get_info", 00:04:29.301 "trace_get_tpoint_group_mask", 00:04:29.301 "trace_disable_tpoint_group", 00:04:29.301 "trace_enable_tpoint_group", 00:04:29.301 "trace_clear_tpoint_mask", 00:04:29.301 "trace_set_tpoint_mask", 00:04:29.301 "spdk_get_version", 00:04:29.301 "rpc_get_methods" 00:04:29.301 ] 00:04:29.301 00:05:47 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:29.301 00:05:47 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:29.301 00:05:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:29.302 00:05:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:29.302 00:05:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1330067 00:04:29.302 00:05:47 spdkcli_tcp -- common/autotest_common.sh@942 -- # '[' -z 1330067 ']' 00:04:29.302 00:05:47 spdkcli_tcp -- common/autotest_common.sh@946 -- # kill -0 1330067 00:04:29.302 00:05:47 spdkcli_tcp -- common/autotest_common.sh@947 -- # uname 00:04:29.302 00:05:47 spdkcli_tcp -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:29.302 00:05:47 spdkcli_tcp -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1330067 00:04:29.302 00:05:48 spdkcli_tcp -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:29.302 00:05:48 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:29.302 00:05:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1330067' 00:04:29.302 killing process with pid 1330067 00:04:29.302 00:05:48 spdkcli_tcp -- common/autotest_common.sh@961 -- # kill 1330067 00:04:29.302 00:05:48 spdkcli_tcp -- common/autotest_common.sh@966 -- # wait 1330067 00:04:29.561 00:04:29.561 real 0m1.498s 00:04:29.561 user 0m2.768s 00:04:29.561 sys 0m0.443s 00:04:29.561 00:05:48 spdkcli_tcp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:29.561 00:05:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:29.561 ************************************ 00:04:29.561 END TEST spdkcli_tcp 00:04:29.561 ************************************ 00:04:29.561 00:05:48 -- common/autotest_common.sh@1136 -- # return 0 00:04:29.561 00:05:48 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:29.561 00:05:48 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:29.561 00:05:48 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:29.561 00:05:48 -- common/autotest_common.sh@10 -- # set +x 00:04:29.561 ************************************ 00:04:29.561 START TEST dpdk_mem_utility 00:04:29.561 ************************************ 00:04:29.561 00:05:48 dpdk_mem_utility -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:29.819 * Looking for test storage... 
00:04:29.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:29.819 00:05:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:29.819 00:05:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1330393 00:04:29.819 00:05:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1330393 00:04:29.819 00:05:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.819 00:05:48 dpdk_mem_utility -- common/autotest_common.sh@823 -- # '[' -z 1330393 ']' 00:04:29.819 00:05:48 dpdk_mem_utility -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.819 00:05:48 dpdk_mem_utility -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:29.819 00:05:48 dpdk_mem_utility -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.819 00:05:48 dpdk_mem_utility -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:29.819 00:05:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:29.819 [2024-07-16 00:05:48.522165] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:04:29.819 [2024-07-16 00:05:48.522218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1330393 ] 00:04:29.819 [2024-07-16 00:05:48.576104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.819 [2024-07-16 00:05:48.657736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.754 00:05:49 dpdk_mem_utility -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:30.754 00:05:49 dpdk_mem_utility -- common/autotest_common.sh@856 -- # return 0 00:04:30.754 00:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:30.754 00:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:30.754 00:05:49 dpdk_mem_utility -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:30.754 00:05:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:30.754 { 00:04:30.754 "filename": "/tmp/spdk_mem_dump.txt" 00:04:30.754 } 00:04:30.754 00:05:49 dpdk_mem_utility -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:30.754 00:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:30.754 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:30.754 1 heaps totaling size 814.000000 MiB 00:04:30.754 size: 814.000000 MiB heap id: 0 00:04:30.754 end heaps---------- 00:04:30.754 8 mempools totaling size 598.116089 MiB 00:04:30.754 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:30.754 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:30.754 size: 84.521057 MiB name: bdev_io_1330393 00:04:30.754 size: 51.011292 MiB name: evtpool_1330393 00:04:30.754 size: 50.003479 MiB name: msgpool_1330393 00:04:30.754 size: 
21.763794 MiB name: PDU_Pool 00:04:30.754 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:30.754 size: 0.026123 MiB name: Session_Pool 00:04:30.754 end mempools------- 00:04:30.754 6 memzones totaling size 4.142822 MiB 00:04:30.754 size: 1.000366 MiB name: RG_ring_0_1330393 00:04:30.754 size: 1.000366 MiB name: RG_ring_1_1330393 00:04:30.754 size: 1.000366 MiB name: RG_ring_4_1330393 00:04:30.754 size: 1.000366 MiB name: RG_ring_5_1330393 00:04:30.754 size: 0.125366 MiB name: RG_ring_2_1330393 00:04:30.754 size: 0.015991 MiB name: RG_ring_3_1330393 00:04:30.754 end memzones------- 00:04:30.754 00:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:30.754 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:30.754 list of free elements. size: 12.519348 MiB 00:04:30.754 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:30.754 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:30.754 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:30.754 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:30.754 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:30.754 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:30.754 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:30.754 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:30.754 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:30.754 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:30.754 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:30.754 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:30.754 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:30.754 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:30.754 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:30.754 list of standard malloc elements. 
size: 199.218079 MiB 00:04:30.754 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:30.754 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:30.754 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:30.754 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:30.754 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:30.754 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:30.754 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:30.754 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:30.755 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:30.755 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:30.755 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:30.755 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:30.755 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:30.755 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:30.755 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:30.755 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:30.755 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:30.755 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:30.755 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:30.755 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:30.755 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:30.755 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:30.755 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:30.755 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:30.755 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:30.755 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:30.755 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:30.755 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:30.755 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:30.755 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:30.755 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:30.755 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:30.755 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:30.755 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:30.755 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:30.755 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:30.755 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:30.755 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:30.755 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:30.755 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:30.755 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:30.755 list of memzone associated elements. 
size: 602.262573 MiB 00:04:30.755 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:30.755 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:30.755 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:30.755 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:30.755 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:30.755 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1330393_0 00:04:30.755 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:30.755 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1330393_0 00:04:30.755 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:30.755 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1330393_0 00:04:30.755 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:30.755 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:30.755 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:30.755 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:30.755 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:30.755 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1330393 00:04:30.755 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:30.755 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1330393 00:04:30.755 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:30.755 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1330393 00:04:30.755 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:30.755 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:30.755 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:30.755 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:30.755 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:30.755 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:30.755 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:30.755 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:30.755 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:30.755 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1330393 00:04:30.755 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:30.755 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1330393 00:04:30.755 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:30.755 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1330393 00:04:30.755 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:30.755 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1330393 00:04:30.755 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:30.755 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1330393 00:04:30.755 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:30.755 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:30.755 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:30.755 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:30.755 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:30.755 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:30.755 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:30.755 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1330393 00:04:30.755 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:30.755 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:30.755 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:30.755 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:30.755 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:30.755 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1330393 00:04:30.755 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:30.755 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:30.755 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:30.755 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1330393 00:04:30.755 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:30.755 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1330393 00:04:30.755 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:30.755 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:30.755 00:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:30.755 00:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1330393 00:04:30.755 00:05:49 dpdk_mem_utility -- common/autotest_common.sh@942 -- # '[' -z 1330393 ']' 00:04:30.755 00:05:49 dpdk_mem_utility -- common/autotest_common.sh@946 -- # kill -0 1330393 00:04:30.755 00:05:49 dpdk_mem_utility -- common/autotest_common.sh@947 -- # uname 00:04:30.755 00:05:49 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:30.755 00:05:49 dpdk_mem_utility -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1330393 00:04:30.755 00:05:49 dpdk_mem_utility -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:30.755 00:05:49 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:30.755 00:05:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1330393' 00:04:30.755 killing process with pid 1330393 00:04:30.755 00:05:49 dpdk_mem_utility -- common/autotest_common.sh@961 -- # kill 1330393 00:04:30.755 00:05:49 dpdk_mem_utility -- common/autotest_common.sh@966 -- # wait 1330393 00:04:31.013 00:04:31.013 real 0m1.390s 00:04:31.014 user 0m1.472s 00:04:31.014 sys 0m0.390s 00:04:31.014 00:05:49 dpdk_mem_utility -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:31.014 00:05:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:31.014 ************************************ 00:04:31.014 END TEST dpdk_mem_utility 00:04:31.014 ************************************ 00:04:31.014 00:05:49 -- common/autotest_common.sh@1136 -- # return 0 00:04:31.014 00:05:49 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:31.014 00:05:49 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:31.014 00:05:49 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:31.014 00:05:49 -- common/autotest_common.sh@10 -- # set +x 00:04:31.014 ************************************ 00:04:31.014 START TEST event 00:04:31.014 ************************************ 00:04:31.014 00:05:49 event -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:31.272 * Looking for test storage... 
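What the dpdk_mem_utility test above checks: the env_dpdk_get_mem_stats RPC makes the target dump its DPDK heap state to /tmp/spdk_mem_dump.txt (the filename in the RPC reply above), and scripts/dpdk_mem_info.py renders that dump, first as the heap/mempool/memzone summary and then, with -m 0, as the per-element listing for heap 0. Reduced to its three commands, assuming a target already up on the default RPC socket:

  scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
  scripts/dpdk_mem_info.py                # summary: heaps, mempools, memzones
  scripts/dpdk_mem_info.py -m 0           # detailed free/busy element list, as above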
00:04:31.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:31.272 00:05:49 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:31.272 00:05:49 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:31.272 00:05:49 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:31.272 00:05:49 event -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:04:31.272 00:05:49 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:31.272 00:05:49 event -- common/autotest_common.sh@10 -- # set +x 00:04:31.272 ************************************ 00:04:31.272 START TEST event_perf 00:04:31.272 ************************************ 00:04:31.272 00:05:49 event.event_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:31.272 Running I/O for 1 seconds...[2024-07-16 00:05:49.976461] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:04:31.272 [2024-07-16 00:05:49.976529] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1330752 ] 00:04:31.272 [2024-07-16 00:05:50.046814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:31.531 [2024-07-16 00:05:50.142099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.531 [2024-07-16 00:05:50.142198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:31.531 [2024-07-16 00:05:50.142298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:31.531 [2024-07-16 00:05:50.142300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.511 Running I/O for 1 seconds... 00:04:32.511 lcore 0: 202947 00:04:32.511 lcore 1: 202947 00:04:32.511 lcore 2: 202948 00:04:32.511 lcore 3: 202948 00:04:32.511 done. 00:04:32.511 00:04:32.511 real 0m1.257s 00:04:32.511 user 0m4.168s 00:04:32.511 sys 0m0.087s 00:04:32.511 00:05:51 event.event_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:32.511 00:05:51 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:32.511 ************************************ 00:04:32.511 END TEST event_perf 00:04:32.511 ************************************ 00:04:32.511 00:05:51 event -- common/autotest_common.sh@1136 -- # return 0 00:04:32.511 00:05:51 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:32.511 00:05:51 event -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:04:32.511 00:05:51 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:32.511 00:05:51 event -- common/autotest_common.sh@10 -- # set +x 00:04:32.511 ************************************ 00:04:32.511 START TEST event_reactor 00:04:32.511 ************************************ 00:04:32.511 00:05:51 event.event_reactor -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:32.511 [2024-07-16 00:05:51.300672] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:04:32.511 [2024-07-16 00:05:51.300746] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1330983 ] 00:04:32.775 [2024-07-16 00:05:51.359445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.775 [2024-07-16 00:05:51.439442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.714 test_start 00:04:33.714 oneshot 00:04:33.714 tick 100 00:04:33.714 tick 100 00:04:33.714 tick 250 00:04:33.714 tick 100 00:04:33.714 tick 100 00:04:33.714 tick 250 00:04:33.714 tick 100 00:04:33.714 tick 500 00:04:33.714 tick 100 00:04:33.714 tick 100 00:04:33.714 tick 250 00:04:33.714 tick 100 00:04:33.714 tick 100 00:04:33.714 test_end 00:04:33.714 00:04:33.714 real 0m1.227s 00:04:33.714 user 0m1.158s 00:04:33.714 sys 0m0.066s 00:04:33.714 00:05:52 event.event_reactor -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:33.714 00:05:52 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:33.714 ************************************ 00:04:33.714 END TEST event_reactor 00:04:33.714 ************************************ 00:04:33.714 00:05:52 event -- common/autotest_common.sh@1136 -- # return 0 00:04:33.714 00:05:52 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:33.714 00:05:52 event -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:04:33.714 00:05:52 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:33.714 00:05:52 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.974 ************************************ 00:04:33.974 START TEST event_reactor_perf 00:04:33.974 ************************************ 00:04:33.974 00:05:52 event.event_reactor_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:33.974 [2024-07-16 00:05:52.593364] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:04:33.974 [2024-07-16 00:05:52.593431] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1331206 ] 00:04:33.974 [2024-07-16 00:05:52.651357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.974 [2024-07-16 00:05:52.724058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.353 test_start 00:04:35.353 test_end 00:04:35.353 Performance: 505578 events per second 00:04:35.353 00:04:35.353 real 0m1.218s 00:04:35.353 user 0m1.138s 00:04:35.353 sys 0m0.077s 00:04:35.353 00:05:53 event.event_reactor_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:35.353 00:05:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:35.353 ************************************ 00:04:35.353 END TEST event_reactor_perf 00:04:35.353 ************************************ 00:04:35.353 00:05:53 event -- common/autotest_common.sh@1136 -- # return 0 00:04:35.353 00:05:53 event -- event/event.sh@49 -- # uname -s 00:04:35.353 00:05:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:35.353 00:05:53 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:35.353 00:05:53 event -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:35.353 00:05:53 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:35.353 00:05:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.353 ************************************ 00:04:35.353 START TEST event_scheduler 00:04:35.353 ************************************ 00:04:35.353 00:05:53 event.event_scheduler -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:35.353 * Looking for test storage... 00:04:35.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:35.353 00:05:53 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:35.353 00:05:53 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:35.353 00:05:53 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1331485 00:04:35.353 00:05:53 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.353 00:05:53 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1331485 00:04:35.353 00:05:53 event.event_scheduler -- common/autotest_common.sh@823 -- # '[' -z 1331485 ']' 00:04:35.353 00:05:53 event.event_scheduler -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.353 00:05:53 event.event_scheduler -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:35.353 00:05:53 event.event_scheduler -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
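The three event tests above are small standalone binaries rather than RPC-driven apps: event_perf pumps events for -t seconds across the lcores in the -m mask and prints a per-lcore count (about 203k each on the four lcores here), reactor schedules oneshot and periodic pollers (the tick 100/250/500 lines), and reactor_perf measures raw event throughput (505578 events per second in this run). Invoked directly, with paths as in this workspace, they look like:

  test/event/event_perf/event_perf -m 0xF -t 1     # per-lcore event counts
  test/event/reactor/reactor -t 1                  # poller scheduling smoke test
  test/event/reactor_perf/reactor_perf -t 1        # single-core event throughput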
00:04:35.353 00:05:53 event.event_scheduler -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:35.353 00:05:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:35.353 [2024-07-16 00:05:53.978298] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:04:35.353 [2024-07-16 00:05:53.978344] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1331485 ] 00:04:35.353 [2024-07-16 00:05:54.029262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:35.353 [2024-07-16 00:05:54.111889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.353 [2024-07-16 00:05:54.111974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.353 [2024-07-16 00:05:54.112064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:35.353 [2024-07-16 00:05:54.112067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:36.291 00:05:54 event.event_scheduler -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:36.291 00:05:54 event.event_scheduler -- common/autotest_common.sh@856 -- # return 0 00:04:36.291 00:05:54 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:36.291 00:05:54 event.event_scheduler -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:36.291 00:05:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:36.291 [2024-07-16 00:05:54.806504] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:36.291 [2024-07-16 00:05:54.806522] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:36.291 [2024-07-16 00:05:54.806530] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:36.291 [2024-07-16 00:05:54.806535] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:36.291 [2024-07-16 00:05:54.806540] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:36.291 00:05:54 event.event_scheduler -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:36.291 00:05:54 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:36.291 00:05:54 event.event_scheduler -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:36.291 00:05:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:36.291 [2024-07-16 00:05:54.878407] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
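The scheduler helper above is started with --wait-for-rpc, which is what lets the test select the dynamic scheduler before subsystems initialize; the dpdk_governor *ERROR* about SMT siblings only means the governor piece is skipped, and the dynamic scheduler still comes up with its default limits (load 20, core 80, busy 95, per the set_opts notices). The RPC order, which matters here, is roughly:

  scripts/rpc.py framework_set_scheduler dynamic   # legal only before init, hence --wait-for-rpc
  scripts/rpc.py framework_start_init              # init completes; the app then logs its start notice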
00:04:36.291 00:05:54 event.event_scheduler -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:36.291 00:05:54 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:36.291 00:05:54 event.event_scheduler -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:36.291 00:05:54 event.event_scheduler -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:36.291 00:05:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:36.291 ************************************ 00:04:36.291 START TEST scheduler_create_thread 00:04:36.291 ************************************ 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1117 -- # scheduler_create_thread 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.291 2 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.291 3 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.291 4 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.291 5 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.291 6 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.291 7 00:04:36.291 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:36.292 00:05:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:36.292 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:36.292 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.292 8 00:04:36.292 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:36.292 00:05:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:36.292 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:36.292 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.292 9 00:04:36.292 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:36.292 00:05:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:36.292 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:36.292 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.292 10 00:04:36.292 00:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:36.292 00:05:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:36.292 00:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:36.292 00:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.292 00:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:36.292 00:05:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:36.292 00:05:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:36.292 00:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:36.292 00:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.860 00:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:36.860 00:05:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:36.860 00:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:36.860 00:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.239 00:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:38.239 00:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:38.239 00:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:38.239 00:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:38.239 00:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.178 00:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:39.178 00:04:39.178 real 0m3.099s 00:04:39.178 user 0m0.020s 00:04:39.178 sys 0m0.008s 00:04:39.178 00:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:39.178 00:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.178 ************************************ 00:04:39.178 END TEST scheduler_create_thread 00:04:39.178 ************************************ 00:04:39.438 00:05:58 event.event_scheduler -- common/autotest_common.sh@1136 -- # return 0 00:04:39.438 00:05:58 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:39.438 00:05:58 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1331485 00:04:39.438 00:05:58 event.event_scheduler -- common/autotest_common.sh@942 -- # '[' -z 1331485 ']' 00:04:39.438 00:05:58 event.event_scheduler -- common/autotest_common.sh@946 -- # kill -0 1331485 00:04:39.438 00:05:58 event.event_scheduler -- common/autotest_common.sh@947 -- # uname 00:04:39.438 00:05:58 event.event_scheduler -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:39.438 00:05:58 event.event_scheduler -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1331485 00:04:39.438 00:05:58 event.event_scheduler -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:04:39.438 00:05:58 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:04:39.438 00:05:58 event.event_scheduler -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1331485' 00:04:39.438 killing process with pid 1331485 00:04:39.438 00:05:58 event.event_scheduler -- common/autotest_common.sh@961 -- # kill 1331485 00:04:39.438 00:05:58 event.event_scheduler -- common/autotest_common.sh@966 -- # wait 1331485 00:04:39.697 [2024-07-16 00:05:58.393609] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
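The scheduler_create_thread subtest above drives the binary's scheduler_plugin through rpc.py: it creates pinned active and idle threads (-m cpumask, -a percent busy), then retunes one thread by its returned id and deletes another. In plain form, with the ids 11 and 12 returned in this run:

  rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0   # returned thread_id=11 here
  rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50             # raise its activity to 50%
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100     # returned thread_id=12 here
  rpc.py --plugin scheduler_plugin scheduler_thread_delete 12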
00:04:39.957 00:04:39.957 real 0m4.742s 00:04:39.957 user 0m9.296s 00:04:39.957 sys 0m0.357s 00:04:39.957 00:05:58 event.event_scheduler -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:39.957 00:05:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:39.957 ************************************ 00:04:39.957 END TEST event_scheduler 00:04:39.957 ************************************ 00:04:39.957 00:05:58 event -- common/autotest_common.sh@1136 -- # return 0 00:04:39.957 00:05:58 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:39.957 00:05:58 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:39.957 00:05:58 event -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:39.957 00:05:58 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:39.957 00:05:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.957 ************************************ 00:04:39.957 START TEST app_repeat 00:04:39.957 ************************************ 00:04:39.957 00:05:58 event.app_repeat -- common/autotest_common.sh@1117 -- # app_repeat_test 00:04:39.957 00:05:58 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.957 00:05:58 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.957 00:05:58 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:39.957 00:05:58 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.957 00:05:58 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:39.957 00:05:58 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:39.957 00:05:58 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:39.957 00:05:58 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1332404 00:04:39.957 00:05:58 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.957 00:05:58 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:39.957 00:05:58 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1332404' 00:04:39.957 Process app_repeat pid: 1332404 00:04:39.957 00:05:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:39.957 00:05:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:39.957 spdk_app_start Round 0 00:04:39.957 00:05:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1332404 /var/tmp/spdk-nbd.sock 00:04:39.957 00:05:58 event.app_repeat -- common/autotest_common.sh@823 -- # '[' -z 1332404 ']' 00:04:39.957 00:05:58 event.app_repeat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:39.957 00:05:58 event.app_repeat -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:39.957 00:05:58 event.app_repeat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:39.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:39.957 00:05:58 event.app_repeat -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:39.957 00:05:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:39.957 [2024-07-16 00:05:58.705109] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:04:39.957 [2024-07-16 00:05:58.705156] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1332404 ] 00:04:39.957 [2024-07-16 00:05:58.759467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:40.216 [2024-07-16 00:05:58.839958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.216 [2024-07-16 00:05:58.839961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.216 00:05:58 event.app_repeat -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:40.216 00:05:58 event.app_repeat -- common/autotest_common.sh@856 -- # return 0 00:04:40.216 00:05:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.476 Malloc0 00:04:40.476 00:05:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.476 Malloc1 00:04:40.476 00:05:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.476 00:05:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.476 00:05:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.476 00:05:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:40.476 00:05:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.476 00:05:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:40.476 00:05:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.476 00:05:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.476 00:05:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.476 00:05:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:40.476 00:05:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.476 00:05:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:40.476 00:05:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:40.476 00:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:40.476 00:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.476 00:05:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:40.736 /dev/nbd0 00:04:40.736 00:05:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:40.736 00:05:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:40.736 00:05:59 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd0 00:04:40.736 00:05:59 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:04:40.736 00:05:59 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:04:40.736 00:05:59 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:04:40.736 00:05:59 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd0 
/proc/partitions 00:04:40.736 00:05:59 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:04:40.736 00:05:59 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:04:40.736 00:05:59 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:04:40.736 00:05:59 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.736 1+0 records in 00:04:40.736 1+0 records out 00:04:40.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222087 s, 18.4 MB/s 00:04:40.736 00:05:59 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.736 00:05:59 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:04:40.736 00:05:59 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.736 00:05:59 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:04:40.736 00:05:59 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:04:40.736 00:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.736 00:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.736 00:05:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:40.995 /dev/nbd1 00:04:40.995 00:05:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:40.995 00:05:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:40.995 00:05:59 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd1 00:04:40.995 00:05:59 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:04:40.995 00:05:59 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:04:40.995 00:05:59 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:04:40.995 00:05:59 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd1 /proc/partitions 00:04:40.995 00:05:59 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:04:40.995 00:05:59 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:04:40.995 00:05:59 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:04:40.995 00:05:59 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.995 1+0 records in 00:04:40.995 1+0 records out 00:04:40.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201542 s, 20.3 MB/s 00:04:40.995 00:05:59 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.995 00:05:59 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:04:40.995 00:05:59 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.995 00:05:59 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:04:40.995 00:05:59 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:04:40.995 00:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.995 00:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.995 
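The app_repeat setup above: each round creates two 64 MiB Malloc bdevs with 4 KiB blocks over the app's private RPC socket /var/tmp/spdk-nbd.sock and exports them as /dev/nbd0 and /dev/nbd1; waitfornbd polls /proc/partitions until the kernel sees each device, then reads a single 4 KiB block with direct I/O (the 1+0 records lines) to confirm it answers. The essential commands, per the log:

  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # Malloc0; repeated for Malloc1
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  grep -q -w nbd0 /proc/partitions                                   # retried until the device appears
  dd if=/dev/nbd0 of=nbdtest bs=4096 count=1 iflag=direct            # one-block direct-read probe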
00:05:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.995 00:05:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.995 00:05:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:41.254 { 00:04:41.254 "nbd_device": "/dev/nbd0", 00:04:41.254 "bdev_name": "Malloc0" 00:04:41.254 }, 00:04:41.254 { 00:04:41.254 "nbd_device": "/dev/nbd1", 00:04:41.254 "bdev_name": "Malloc1" 00:04:41.254 } 00:04:41.254 ]' 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:41.254 { 00:04:41.254 "nbd_device": "/dev/nbd0", 00:04:41.254 "bdev_name": "Malloc0" 00:04:41.254 }, 00:04:41.254 { 00:04:41.254 "nbd_device": "/dev/nbd1", 00:04:41.254 "bdev_name": "Malloc1" 00:04:41.254 } 00:04:41.254 ]' 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:41.254 /dev/nbd1' 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:41.254 /dev/nbd1' 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:41.254 256+0 records in 00:04:41.254 256+0 records out 00:04:41.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00926726 s, 113 MB/s 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:41.254 256+0 records in 00:04:41.254 256+0 records out 00:04:41.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138688 s, 75.6 MB/s 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.254 00:05:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:41.254 256+0 records in 00:04:41.254 256+0 records out 00:04:41.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155345 s, 67.5 MB/s 00:04:41.254 00:06:00 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:41.254 00:06:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.254 00:06:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.254 00:06:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:41.254 00:06:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.254 00:06:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:41.254 00:06:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:41.254 00:06:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.254 00:06:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:41.254 00:06:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.254 00:06:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:41.254 00:06:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.254 00:06:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:41.254 00:06:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.254 00:06:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.254 00:06:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:41.254 00:06:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:41.255 00:06:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.255 00:06:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:41.513 00:06:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:41.513 00:06:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:41.513 00:06:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:41.513 00:06:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.513 00:06:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.513 00:06:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:41.513 00:06:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.513 00:06:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.513 00:06:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.513 00:06:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:41.772 00:06:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:41.772 00:06:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:41.772 00:06:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:41.772 00:06:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.772 00:06:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:41.772 00:06:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:41.772 00:06:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.772 00:06:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.772 00:06:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.772 00:06:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.772 00:06:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.772 00:06:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:41.772 00:06:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:41.772 00:06:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:42.031 00:06:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:42.031 00:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.031 00:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:42.031 00:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:42.031 00:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:42.031 00:06:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:42.031 00:06:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:42.031 00:06:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:42.031 00:06:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:42.031 00:06:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:42.031 00:06:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:42.289 [2024-07-16 00:06:01.030265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.289 [2024-07-16 00:06:01.106562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.289 [2024-07-16 00:06:01.106565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.548 [2024-07-16 00:06:01.146587] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:42.548 [2024-07-16 00:06:01.146625] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:45.080 00:06:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:45.080 00:06:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:45.080 spdk_app_start Round 1 00:04:45.080 00:06:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1332404 /var/tmp/spdk-nbd.sock 00:04:45.080 00:06:03 event.app_repeat -- common/autotest_common.sh@823 -- # '[' -z 1332404 ']' 00:04:45.080 00:06:03 event.app_repeat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:45.080 00:06:03 event.app_repeat -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:45.080 00:06:03 event.app_repeat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:45.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
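The write/verify exchange traced above (nbd_common.sh@70-85, nbd_dd_data_verify) reduces to the pattern below. This is a condensed sketch reconstructed from the xtrace lines: the device list, block counts, and cmp flags match the trace, but the standalone form and the shortened tmp_file path are assumptions.

    nbd_list=('/dev/nbd0' '/dev/nbd1')
    tmp_file=/tmp/nbdrandtest        # trace uses spdk/test/event/nbdrandtest

    # write pass: 1 MiB of random data, mirrored to every NBD device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for i in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
    done

    # verify pass: byte-compare the first 1M of each device against the file
    for i in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$i"
    done
    rm "$tmp_file"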
00:04:45.080 00:06:03 event.app_repeat -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:45.080 00:06:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:45.339 00:06:04 event.app_repeat -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:45.339 00:06:04 event.app_repeat -- common/autotest_common.sh@856 -- # return 0 00:04:45.339 00:06:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.339 Malloc0 00:04:45.598 00:06:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.598 Malloc1 00:04:45.598 00:06:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.598 00:06:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.598 00:06:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.598 00:06:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:45.598 00:06:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.598 00:06:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:45.598 00:06:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.598 00:06:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.598 00:06:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.598 00:06:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:45.598 00:06:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.598 00:06:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:45.598 00:06:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:45.598 00:06:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:45.598 00:06:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.598 00:06:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:45.858 /dev/nbd0 00:04:45.858 00:06:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:45.858 00:06:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:45.858 00:06:04 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd0 00:04:45.858 00:06:04 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:04:45.858 00:06:04 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:04:45.858 00:06:04 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:04:45.858 00:06:04 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd0 /proc/partitions 00:04:45.858 00:06:04 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:04:45.858 00:06:04 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:04:45.858 00:06:04 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:04:45.858 00:06:04 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:45.858 1+0 records in 00:04:45.858 1+0 records out 00:04:45.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017583 s, 23.3 MB/s 00:04:45.858 00:06:04 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.858 00:06:04 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:04:45.858 00:06:04 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.858 00:06:04 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:04:45.858 00:06:04 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:04:45.858 00:06:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.858 00:06:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.858 00:06:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:46.117 /dev/nbd1 00:04:46.117 00:06:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:46.117 00:06:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:46.117 00:06:04 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd1 00:04:46.117 00:06:04 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:04:46.117 00:06:04 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:04:46.117 00:06:04 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:04:46.117 00:06:04 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd1 /proc/partitions 00:04:46.117 00:06:04 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:04:46.117 00:06:04 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:04:46.117 00:06:04 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:04:46.118 00:06:04 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.118 1+0 records in 00:04:46.118 1+0 records out 00:04:46.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000117479 s, 34.9 MB/s 00:04:46.118 00:06:04 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.118 00:06:04 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:04:46.118 00:06:04 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.118 00:06:04 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:04:46.118 00:06:04 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:04:46.118 00:06:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.118 00:06:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.118 00:06:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.118 00:06:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.118 00:06:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.118 00:06:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:46.118 { 00:04:46.118 "nbd_device": "/dev/nbd0", 00:04:46.118 "bdev_name": "Malloc0" 00:04:46.118 }, 00:04:46.118 { 00:04:46.118 "nbd_device": "/dev/nbd1", 00:04:46.118 "bdev_name": "Malloc1" 00:04:46.118 } 00:04:46.118 ]' 00:04:46.118 00:06:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:46.118 { 00:04:46.118 "nbd_device": "/dev/nbd0", 00:04:46.118 "bdev_name": "Malloc0" 00:04:46.118 }, 00:04:46.118 { 00:04:46.118 "nbd_device": "/dev/nbd1", 00:04:46.118 "bdev_name": "Malloc1" 00:04:46.118 } 00:04:46.118 ]' 00:04:46.118 00:06:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.377 00:06:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:46.377 /dev/nbd1' 00:04:46.377 00:06:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:46.377 /dev/nbd1' 00:04:46.377 00:06:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.377 00:06:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:46.377 00:06:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:46.377 00:06:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:46.377 00:06:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:46.377 00:06:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:46.377 00:06:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.377 00:06:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.377 00:06:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:46.377 00:06:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.377 00:06:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:46.377 00:06:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:46.377 256+0 records in 00:04:46.377 256+0 records out 00:04:46.377 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103724 s, 101 MB/s 00:04:46.377 00:06:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.377 00:06:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:46.377 256+0 records in 00:04:46.377 256+0 records out 00:04:46.377 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136834 s, 76.6 MB/s 00:04:46.377 00:06:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.377 00:06:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:46.377 256+0 records in 00:04:46.377 256+0 records out 00:04:46.377 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148103 s, 70.8 MB/s 00:04:46.377 00:06:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:46.377 00:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.377 00:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.377 00:06:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:46.378 00:06:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.378 00:06:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:46.378 00:06:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:46.378 00:06:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.378 00:06:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:46.378 00:06:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.378 00:06:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:46.378 00:06:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.378 00:06:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:46.378 00:06:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.378 00:06:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.378 00:06:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:46.378 00:06:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:46.378 00:06:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.378 00:06:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:46.637 00:06:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:46.637 00:06:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:46.637 00:06:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:46.638 00:06:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.638 00:06:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.638 00:06:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:46.638 00:06:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:46.638 00:06:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.638 00:06:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.638 00:06:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:46.638 00:06:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:46.638 00:06:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:46.638 00:06:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:46.638 00:06:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.638 00:06:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.638 00:06:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:46.638 00:06:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:46.638 00:06:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.638 00:06:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.638 00:06:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.638 00:06:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.896 00:06:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:46.896 00:06:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:46.896 00:06:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.896 00:06:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:46.896 00:06:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:46.896 00:06:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.896 00:06:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:46.896 00:06:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:46.896 00:06:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:46.896 00:06:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:46.896 00:06:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:46.896 00:06:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:46.896 00:06:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:47.155 00:06:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:47.414 [2024-07-16 00:06:06.055878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:47.414 [2024-07-16 00:06:06.124032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.414 [2024-07-16 00:06:06.124035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.414 [2024-07-16 00:06:06.165701] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:47.414 [2024-07-16 00:06:06.165741] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:50.042 00:06:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:50.042 00:06:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:50.042 spdk_app_start Round 2 00:04:50.042 00:06:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1332404 /var/tmp/spdk-nbd.sock 00:04:50.042 00:06:08 event.app_repeat -- common/autotest_common.sh@823 -- # '[' -z 1332404 ']' 00:04:50.042 00:06:08 event.app_repeat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.042 00:06:08 event.app_repeat -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:50.042 00:06:08 event.app_repeat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
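The waitfornbd_exit calls traced at nbd_common.sh@35-45 poll /proc/partitions until the kernel has torn the device node down after nbd_stop_disk. A minimal sketch of that helper, reconstructed from the xtrace (the sleep interval is an assumption; in the runs above the device is already gone on the first check, so the loop breaks immediately):

    waitfornbd_exit() {
        local nbd_name=$1
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1        # device still registered, wait and retry
            else
                break            # gone from /proc/partitions, safe to proceed
            fi
        done
        return 0
    }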
00:04:50.042 00:06:08 event.app_repeat -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:50.042 00:06:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.299 00:06:09 event.app_repeat -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:50.299 00:06:09 event.app_repeat -- common/autotest_common.sh@856 -- # return 0 00:04:50.299 00:06:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.556 Malloc0 00:04:50.556 00:06:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.814 Malloc1 00:04:50.814 00:06:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.814 00:06:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.814 00:06:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.814 00:06:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:50.814 00:06:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.814 00:06:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:50.814 00:06:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.814 00:06:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.814 00:06:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.814 00:06:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:50.814 00:06:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.814 00:06:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:50.814 00:06:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:50.814 00:06:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:50.814 00:06:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.814 00:06:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:50.814 /dev/nbd0 00:04:50.814 00:06:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:50.814 00:06:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:50.814 00:06:09 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd0 00:04:50.814 00:06:09 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:04:50.814 00:06:09 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:04:50.814 00:06:09 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:04:50.814 00:06:09 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd0 /proc/partitions 00:04:50.814 00:06:09 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:04:50.814 00:06:09 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:04:50.814 00:06:09 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:04:50.814 00:06:09 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:50.814 1+0 records in 00:04:50.814 1+0 records out 00:04:50.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192524 s, 21.3 MB/s 00:04:50.814 00:06:09 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.814 00:06:09 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:04:50.814 00:06:09 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.814 00:06:09 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:04:50.814 00:06:09 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:04:50.814 00:06:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.814 00:06:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.814 00:06:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:51.130 /dev/nbd1 00:04:51.130 00:06:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:51.130 00:06:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:51.130 00:06:09 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd1 00:04:51.130 00:06:09 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:04:51.130 00:06:09 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:04:51.130 00:06:09 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:04:51.130 00:06:09 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd1 /proc/partitions 00:04:51.130 00:06:09 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:04:51.130 00:06:09 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:04:51.130 00:06:09 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:04:51.130 00:06:09 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.130 1+0 records in 00:04:51.130 1+0 records out 00:04:51.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186596 s, 22.0 MB/s 00:04:51.130 00:06:09 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.130 00:06:09 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:04:51.130 00:06:09 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.130 00:06:09 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:04:51.130 00:06:09 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:04:51.130 00:06:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.130 00:06:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.130 00:06:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.130 00:06:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.130 00:06:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:51.389 { 00:04:51.389 "nbd_device": "/dev/nbd0", 00:04:51.389 "bdev_name": "Malloc0" 00:04:51.389 }, 00:04:51.389 { 00:04:51.389 "nbd_device": "/dev/nbd1", 00:04:51.389 "bdev_name": "Malloc1" 00:04:51.389 } 00:04:51.389 ]' 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:51.389 { 00:04:51.389 "nbd_device": "/dev/nbd0", 00:04:51.389 "bdev_name": "Malloc0" 00:04:51.389 }, 00:04:51.389 { 00:04:51.389 "nbd_device": "/dev/nbd1", 00:04:51.389 "bdev_name": "Malloc1" 00:04:51.389 } 00:04:51.389 ]' 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:51.389 /dev/nbd1' 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:51.389 /dev/nbd1' 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:51.389 256+0 records in 00:04:51.389 256+0 records out 00:04:51.389 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00385963 s, 272 MB/s 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:51.389 256+0 records in 00:04:51.389 256+0 records out 00:04:51.389 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138181 s, 75.9 MB/s 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:51.389 256+0 records in 00:04:51.389 256+0 records out 00:04:51.389 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146694 s, 71.5 MB/s 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.389 00:06:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:51.647 00:06:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:51.647 00:06:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:51.647 00:06:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:51.647 00:06:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.647 00:06:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.647 00:06:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:51.647 00:06:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:51.647 00:06:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.647 00:06:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.647 00:06:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:51.906 00:06:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:51.906 00:06:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:52.165 00:06:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:52.423 [2024-07-16 00:06:11.127242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.423 [2024-07-16 00:06:11.193290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.423 [2024-07-16 00:06:11.193294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.423 [2024-07-16 00:06:11.233941] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:52.423 [2024-07-16 00:06:11.233980] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:55.712 00:06:13 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1332404 /var/tmp/spdk-nbd.sock 00:04:55.712 00:06:13 event.app_repeat -- common/autotest_common.sh@823 -- # '[' -z 1332404 ']' 00:04:55.712 00:06:13 event.app_repeat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:55.712 00:06:13 event.app_repeat -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:55.712 00:06:13 event.app_repeat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:55.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
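The three "spdk_app_start Round N" blocks in this capture all follow the same shape, driven by the event.sh@23-35 loop. A condensed, assumed reconstruction (rpc.py stands for the full scripts/rpc.py path seen in the trace, and $app_pid is a hypothetical variable for the app_repeat process id):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock              # app re-listens each round
        rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096  # -> Malloc0
        rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096  # -> Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM  # app restarts for the next round
        sleep 3
    done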
00:04:55.712 00:06:13 event.app_repeat -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:55.712 00:06:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:55.712 00:06:14 event.app_repeat -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:55.712 00:06:14 event.app_repeat -- common/autotest_common.sh@856 -- # return 0 00:04:55.712 00:06:14 event.app_repeat -- event/event.sh@39 -- # killprocess 1332404 00:04:55.712 00:06:14 event.app_repeat -- common/autotest_common.sh@942 -- # '[' -z 1332404 ']' 00:04:55.712 00:06:14 event.app_repeat -- common/autotest_common.sh@946 -- # kill -0 1332404 00:04:55.712 00:06:14 event.app_repeat -- common/autotest_common.sh@947 -- # uname 00:04:55.712 00:06:14 event.app_repeat -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:55.712 00:06:14 event.app_repeat -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1332404 00:04:55.712 00:06:14 event.app_repeat -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:55.712 00:06:14 event.app_repeat -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:55.712 00:06:14 event.app_repeat -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1332404' 00:04:55.712 killing process with pid 1332404 00:04:55.712 00:06:14 event.app_repeat -- common/autotest_common.sh@961 -- # kill 1332404 00:04:55.712 00:06:14 event.app_repeat -- common/autotest_common.sh@966 -- # wait 1332404 00:04:55.712 spdk_app_start is called in Round 0. 00:04:55.712 Shutdown signal received, stop current app iteration 00:04:55.712 Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 reinitialization... 00:04:55.712 spdk_app_start is called in Round 1. 00:04:55.712 Shutdown signal received, stop current app iteration 00:04:55.712 Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 reinitialization... 00:04:55.712 spdk_app_start is called in Round 2. 00:04:55.712 Shutdown signal received, stop current app iteration 00:04:55.712 Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 reinitialization... 00:04:55.712 spdk_app_start is called in Round 3. 
00:04:55.712 Shutdown signal received, stop current app iteration 00:04:55.712 00:06:14 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:55.712 00:06:14 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:55.712 00:04:55.712 real 0m15.650s 00:04:55.712 user 0m33.883s 00:04:55.712 sys 0m2.320s 00:04:55.712 00:06:14 event.app_repeat -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:55.712 00:06:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:55.712 ************************************ 00:04:55.712 END TEST app_repeat 00:04:55.712 ************************************ 00:04:55.712 00:06:14 event -- common/autotest_common.sh@1136 -- # return 0 00:04:55.712 00:06:14 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:55.712 00:06:14 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:55.713 00:06:14 event -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:55.713 00:06:14 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:55.713 00:06:14 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.713 ************************************ 00:04:55.713 START TEST cpu_locks 00:04:55.713 ************************************ 00:04:55.713 00:06:14 event.cpu_locks -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:55.713 * Looking for test storage... 00:04:55.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:55.713 00:06:14 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:55.713 00:06:14 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:55.713 00:06:14 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:55.713 00:06:14 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:55.713 00:06:14 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:55.713 00:06:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:55.713 00:06:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.713 ************************************ 00:04:55.713 START TEST default_locks 00:04:55.713 ************************************ 00:04:55.713 00:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1117 -- # default_locks 00:04:55.713 00:06:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1335740 00:04:55.713 00:06:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1335740 00:04:55.713 00:06:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.713 00:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@823 -- # '[' -z 1335740 ']' 00:04:55.713 00:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.713 00:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:55.713 00:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
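The killprocess helper traced at autotest_common.sh@942-966 roughly does the following. This is a sketch from the xtrace only; the exact guard and error handling in autotest_common.sh are assumptions:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                           # confirm the process still exists
        if [[ $(uname) == Linux ]]; then
            ps --no-headers -o comm= "$pid"      # trace resolves this to reactor_0, then checks it is not 'sudo'
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                      # reap; the NOT-waitforlisten check below proves it is gone
    }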
00:04:55.713 00:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:55.713 00:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.713 [2024-07-16 00:06:14.550542] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:04:55.713 [2024-07-16 00:06:14.550592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1335740 ] 00:04:55.972 [2024-07-16 00:06:14.603374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.972 [2024-07-16 00:06:14.682906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.540 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:56.540 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # return 0 00:04:56.540 00:06:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1335740 00:04:56.540 00:06:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1335740 00:04:56.540 00:06:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:56.799 lslocks: write error 00:04:56.799 00:06:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1335740 00:04:56.799 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@942 -- # '[' -z 1335740 ']' 00:04:56.799 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # kill -0 1335740 00:04:56.799 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # uname 00:04:56.799 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:56.799 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1335740 00:04:56.800 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:56.800 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:56.800 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1335740' 00:04:56.800 killing process with pid 1335740 00:04:56.800 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@961 -- # kill 1335740 00:04:56.800 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # wait 1335740 00:04:57.368 00:06:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1335740 00:04:57.368 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # local es=0 00:04:57.368 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # valid_exec_arg waitforlisten 1335740 00:04:57.368 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@630 -- # local arg=waitforlisten 00:04:57.368 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:57.368 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@634 -- # type -t waitforlisten 00:04:57.368 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:57.368 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@645 -- # waitforlisten 1335740 00:04:57.368 
00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@823 -- # '[' -z 1335740 ']' 00:04:57.368 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.369 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:57.369 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.369 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:57.369 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 838: kill: (1335740) - No such process 00:04:57.369 ERROR: process (pid: 1335740) is no longer running 00:04:57.369 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:57.369 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # return 1 00:04:57.369 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@645 -- # es=1 00:04:57.369 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:04:57.369 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:04:57.369 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:04:57.369 00:06:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:57.369 00:06:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:57.369 00:06:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:57.369 00:06:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:57.369 00:04:57.369 real 0m1.446s 00:04:57.369 user 0m1.538s 00:04:57.369 sys 0m0.446s 00:04:57.369 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:57.369 00:06:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.369 ************************************ 00:04:57.369 END TEST default_locks 00:04:57.369 ************************************ 00:04:57.369 00:06:15 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:04:57.369 00:06:15 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:57.369 00:06:15 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:57.369 00:06:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:57.369 00:06:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.369 ************************************ 00:04:57.369 START TEST default_locks_via_rpc 00:04:57.369 ************************************ 00:04:57.369 00:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1117 -- # default_locks_via_rpc 00:04:57.369 00:06:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1336080 00:04:57.369 00:06:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1336080 00:04:57.369 00:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 1336080 ']' 00:04:57.369 00:06:16 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.369 00:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:57.369 00:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.369 00:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:57.369 00:06:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.369 00:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.369 [2024-07-16 00:06:16.055373] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:04:57.369 [2024-07-16 00:06:16.055413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336080 ] 00:04:57.369 [2024-07-16 00:06:16.107604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.369 [2024-07-16 00:06:16.187807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.305 00:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:58.305 00:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:04:58.305 00:06:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:58.305 00:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:58.305 00:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.306 00:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:58.306 00:06:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:58.306 00:06:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:58.306 00:06:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:58.306 00:06:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:58.306 00:06:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:58.306 00:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:58.306 00:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.306 00:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:58.306 00:06:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1336080 00:04:58.306 00:06:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1336080 00:04:58.306 00:06:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:58.306 00:06:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1336080 00:04:58.306 00:06:16 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@942 -- # '[' -z 1336080 ']' 00:04:58.306 00:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # kill -0 1336080 00:04:58.306 00:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # uname 00:04:58.306 00:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:58.306 00:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1336080 00:04:58.306 00:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:58.306 00:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:58.306 00:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1336080' 00:04:58.306 killing process with pid 1336080 00:04:58.306 00:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@961 -- # kill 1336080 00:04:58.306 00:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # wait 1336080 00:04:58.564 00:04:58.564 real 0m1.335s 00:04:58.564 user 0m1.394s 00:04:58.564 sys 0m0.402s 00:04:58.564 00:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:58.564 00:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.564 ************************************ 00:04:58.565 END TEST default_locks_via_rpc 00:04:58.565 ************************************ 00:04:58.565 00:06:17 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:04:58.565 00:06:17 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:58.565 00:06:17 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:58.565 00:06:17 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:58.565 00:06:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.565 ************************************ 00:04:58.565 START TEST non_locking_app_on_locked_coremask 00:04:58.565 ************************************ 00:04:58.565 00:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1117 -- # non_locking_app_on_locked_coremask 00:04:58.565 00:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1336348 00:04:58.565 00:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1336348 /var/tmp/spdk.sock 00:04:58.565 00:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@823 -- # '[' -z 1336348 ']' 00:04:58.565 00:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.565 00:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:58.565 00:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
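The teardown that just ran is the suite's killprocess helper: it rejects an empty pid, confirms the host is Linux, reads the command name with ps --no-headers -o comm= so it never signals a sudo helper by mistake, then kills the reactor and waits for it to exit. A minimal sketch of that flow, reconstructed from the xtrace lines above rather than copied from autotest_common.sh:

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # refuse an empty pid
    if [ "$(uname)" = Linux ]; then
      # read the command name; the suite aborts rather than kill "sudo"
      local name
      name=$(ps --no-headers -o comm= "$pid")
      [ "$name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                # signal the reactor, then reap it
  }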
00:04:58.565 00:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:58.565 00:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.565 00:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.823 [2024-07-16 00:06:17.443628] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:04:58.823 [2024-07-16 00:06:17.443667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336348 ] 00:04:58.823 [2024-07-16 00:06:17.498649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.823 [2024-07-16 00:06:17.578434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.391 00:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:59.391 00:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # return 0 00:04:59.391 00:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1336436 00:04:59.391 00:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:59.391 00:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1336436 /var/tmp/spdk2.sock 00:04:59.391 00:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@823 -- # '[' -z 1336436 ']' 00:04:59.391 00:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.391 00:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:59.391 00:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:59.391 00:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:59.391 00:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.650 [2024-07-16 00:06:18.257182] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:04:59.650 [2024-07-16 00:06:18.257233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336436 ] 00:04:59.650 [2024-07-16 00:06:18.331349] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
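Only one of the two daemons in this test holds core locks: the first spdk_tgt takes an advisory lock on /var/tmp/spdk_cpu_lock_000 for core 0, while the second is started with --disable-cpumask-locks and -r /var/tmp/spdk2.sock so it can share the same mask. The suite's locks_exist check is visible in the trace as lslocks piped into grep; the stray "lslocks: write error" lines further on are just lslocks hitting the pipe that grep -q closed after its first match. A minimal re-creation of the check (function name taken from the trace, error handling assumed):

  locks_exist() {
    local pid=$1
    # lslocks lists the file locks a process holds; a locked reactor shows
    # one /var/tmp/spdk_cpu_lock_* entry per claimed core
    lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  locks_exist 1336348 && echo "core locks are held"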
00:04:59.650 [2024-07-16 00:06:18.331371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.650 [2024-07-16 00:06:18.476527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.587 00:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:00.588 00:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # return 0 00:05:00.588 00:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1336348 00:05:00.588 00:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1336348 00:05:00.588 00:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.846 lslocks: write error 00:05:00.846 00:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1336348 00:05:00.847 00:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@942 -- # '[' -z 1336348 ']' 00:05:00.847 00:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # kill -0 1336348 00:05:00.847 00:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # uname 00:05:00.847 00:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:00.847 00:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1336348 00:05:00.847 00:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:00.847 00:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:00.847 00:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1336348' 00:05:00.847 killing process with pid 1336348 00:05:00.847 00:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill 1336348 00:05:00.847 00:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # wait 1336348 00:05:01.414 00:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1336436 00:05:01.414 00:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@942 -- # '[' -z 1336436 ']' 00:05:01.414 00:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # kill -0 1336436 00:05:01.414 00:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # uname 00:05:01.414 00:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:01.414 00:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1336436 00:05:01.414 00:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:01.414 00:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:01.414 00:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1336436' 00:05:01.414 
killing process with pid 1336436 00:05:01.414 00:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill 1336436 00:05:01.414 00:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # wait 1336436 00:05:01.674 00:05:01.674 real 0m3.123s 00:05:01.674 user 0m3.314s 00:05:01.674 sys 0m0.874s 00:05:01.674 00:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:01.674 00:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.674 ************************************ 00:05:01.674 END TEST non_locking_app_on_locked_coremask 00:05:01.674 ************************************ 00:05:01.934 00:06:20 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:05:01.934 00:06:20 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:01.934 00:06:20 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:01.934 00:06:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:01.934 00:06:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.934 ************************************ 00:05:01.934 START TEST locking_app_on_unlocked_coremask 00:05:01.934 ************************************ 00:05:01.934 00:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1117 -- # locking_app_on_unlocked_coremask 00:05:01.934 00:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1336926 00:05:01.934 00:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1336926 /var/tmp/spdk.sock 00:05:01.934 00:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@823 -- # '[' -z 1336926 ']' 00:05:01.934 00:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.934 00:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:01.934 00:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.934 00:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:01.934 00:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:01.934 00:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.934 [2024-07-16 00:06:20.621545] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:05:01.934 [2024-07-16 00:06:20.621590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336926 ] 00:05:01.934 [2024-07-16 00:06:20.673803] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
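This test inverts the previous one: here the first target is the unlocked party (it starts with --disable-cpumask-locks, hence the "CPU core locks deactivated" notice above), and the second target, launched with default locking on the same 0x1 mask, is the one that claims core 0. Compressed to its two launches (binary path shortened, the waitforlisten polling elided):

  spdk_tgt -m 0x1 --disable-cpumask-locks &     # first app: holds no core locks
  pid1=$!
  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &      # second app: claims core 0's lock file
  pid2=$!
  lslocks -p "$pid2" | grep spdk_cpu_lock       # only pid2 shows up here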
00:05:01.934 [2024-07-16 00:06:20.673826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.934 [2024-07-16 00:06:20.752518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.873 00:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:02.873 00:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # return 0 00:05:02.873 00:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1336999 00:05:02.873 00:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:02.873 00:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1336999 /var/tmp/spdk2.sock 00:05:02.873 00:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@823 -- # '[' -z 1336999 ']' 00:05:02.873 00:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.873 00:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:02.873 00:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:02.873 00:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:02.873 00:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.873 [2024-07-16 00:06:21.429163] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:05:02.873 [2024-07-16 00:06:21.429210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336999 ] 00:05:02.873 [2024-07-16 00:06:21.505790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.873 [2024-07-16 00:06:21.658703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.441 00:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:03.441 00:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # return 0 00:05:03.441 00:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1336999 00:05:03.441 00:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1336999 00:05:03.441 00:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:04.009 lslocks: write error 00:05:04.009 00:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1336926 00:05:04.009 00:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@942 -- # '[' -z 1336926 ']' 00:05:04.009 00:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # kill -0 1336926 00:05:04.009 00:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # uname 00:05:04.009 00:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:04.009 00:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1336926 00:05:04.009 00:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:04.009 00:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:04.009 00:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1336926' 00:05:04.009 killing process with pid 1336926 00:05:04.009 00:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@961 -- # kill 1336926 00:05:04.009 00:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # wait 1336926 00:05:04.944 00:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1336999 00:05:04.944 00:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@942 -- # '[' -z 1336999 ']' 00:05:04.944 00:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # kill -0 1336999 00:05:04.944 00:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # uname 00:05:04.944 00:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:04.944 00:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1336999 00:05:04.944 00:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:04.944 00:06:23 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:04.944 00:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1336999' 00:05:04.944 killing process with pid 1336999 00:05:04.944 00:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@961 -- # kill 1336999 00:05:04.944 00:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # wait 1336999 00:05:05.203 00:05:05.203 real 0m3.234s 00:05:05.203 user 0m3.437s 00:05:05.203 sys 0m0.928s 00:05:05.203 00:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:05.203 00:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.203 ************************************ 00:05:05.203 END TEST locking_app_on_unlocked_coremask 00:05:05.203 ************************************ 00:05:05.203 00:06:23 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:05:05.203 00:06:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:05.203 00:06:23 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:05.203 00:06:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:05.203 00:06:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.203 ************************************ 00:05:05.203 START TEST locking_app_on_locked_coremask 00:05:05.203 ************************************ 00:05:05.203 00:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1117 -- # locking_app_on_locked_coremask 00:05:05.203 00:06:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1337429 00:05:05.203 00:06:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1337429 /var/tmp/spdk.sock 00:05:05.203 00:06:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.203 00:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@823 -- # '[' -z 1337429 ']' 00:05:05.203 00:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.203 00:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:05.203 00:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.203 00:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:05.203 00:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.203 [2024-07-16 00:06:23.919878] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:05:05.203 [2024-07-16 00:06:23.919920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337429 ] 00:05:05.203 [2024-07-16 00:06:23.972855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.203 [2024-07-16 00:06:24.043048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.166 00:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:06.166 00:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # return 0 00:05:06.166 00:06:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:06.166 00:06:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1337656 00:05:06.166 00:06:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1337656 /var/tmp/spdk2.sock 00:05:06.166 00:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # local es=0 00:05:06.166 00:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # valid_exec_arg waitforlisten 1337656 /var/tmp/spdk2.sock 00:05:06.166 00:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@630 -- # local arg=waitforlisten 00:05:06.166 00:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:06.166 00:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@634 -- # type -t waitforlisten 00:05:06.166 00:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:06.166 00:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@645 -- # waitforlisten 1337656 /var/tmp/spdk2.sock 00:05:06.166 00:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@823 -- # '[' -z 1337656 ']' 00:05:06.166 00:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:06.166 00:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:06.166 00:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:06.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:06.166 00:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:06.166 00:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.166 [2024-07-16 00:06:24.742793] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:05:06.166 [2024-07-16 00:06:24.742840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337656 ] 00:05:06.166 [2024-07-16 00:06:24.818187] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1337429 has claimed it. 00:05:06.166 [2024-07-16 00:06:24.818231] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:06.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 838: kill: (1337656) - No such process 00:05:06.745 ERROR: process (pid: 1337656) is no longer running 00:05:06.745 00:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:06.745 00:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # return 1 00:05:06.745 00:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@645 -- # es=1 00:05:06.745 00:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:05:06.745 00:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:05:06.745 00:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:05:06.745 00:06:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1337429 00:05:06.745 00:06:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1337429 00:05:06.745 00:06:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:07.311 lslocks: write error 00:05:07.311 00:06:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1337429 00:05:07.311 00:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@942 -- # '[' -z 1337429 ']' 00:05:07.311 00:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # kill -0 1337429 00:05:07.311 00:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # uname 00:05:07.311 00:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:07.311 00:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1337429 00:05:07.311 00:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:07.311 00:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:07.311 00:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1337429' 00:05:07.311 killing process with pid 1337429 00:05:07.311 00:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill 1337429 00:05:07.311 00:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # wait 1337429 00:05:07.570 00:05:07.570 real 0m2.341s 00:05:07.570 user 0m2.570s 00:05:07.570 sys 0m0.623s 00:05:07.570 00:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1118 -- # xtrace_disable 
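The second target was expected to die, and the es bookkeeping above is how the suite asserts that: the NOT wrapper runs a command, captures its exit status, folds signal deaths (status above 128) down to a plain failure, and itself succeeds only when the wrapped command failed. A minimal version of that inverter, reconstructed from the xtrace lines rather than copied from autotest_common.sh:

  NOT() {
    local es=0
    "$@" || es=$?           # run the wrapped command, keep its exit status
    (( es > 128 )) && es=1  # a signal death still counts as an ordinary failure
    (( es != 0 ))           # succeed only if the command failed
  }

  NOT false && echo "false failed, as expected"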
00:05:07.570 00:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.570 ************************************ 00:05:07.570 END TEST locking_app_on_locked_coremask 00:05:07.570 ************************************ 00:05:07.570 00:06:26 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:05:07.570 00:06:26 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:07.570 00:06:26 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:07.570 00:06:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:07.570 00:06:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.570 ************************************ 00:05:07.570 START TEST locking_overlapped_coremask 00:05:07.570 ************************************ 00:05:07.570 00:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1117 -- # locking_overlapped_coremask 00:05:07.570 00:06:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1337925 00:05:07.570 00:06:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1337925 /var/tmp/spdk.sock 00:05:07.570 00:06:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:07.570 00:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@823 -- # '[' -z 1337925 ']' 00:05:07.570 00:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.570 00:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:07.570 00:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.570 00:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:07.570 00:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.570 [2024-07-16 00:06:26.323754] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:05:07.570 [2024-07-16 00:06:26.323798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337925 ] 00:05:07.570 [2024-07-16 00:06:26.378289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:07.829 [2024-07-16 00:06:26.449724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.829 [2024-07-16 00:06:26.449808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:07.829 [2024-07-16 00:06:26.449810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.396 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:08.396 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # return 0 00:05:08.396 00:06:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:08.396 00:06:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1338158 00:05:08.396 00:06:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1338158 /var/tmp/spdk2.sock 00:05:08.396 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # local es=0 00:05:08.396 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # valid_exec_arg waitforlisten 1338158 /var/tmp/spdk2.sock 00:05:08.396 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@630 -- # local arg=waitforlisten 00:05:08.396 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:08.396 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@634 -- # type -t waitforlisten 00:05:08.396 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:08.396 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@645 -- # waitforlisten 1338158 /var/tmp/spdk2.sock 00:05:08.396 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@823 -- # '[' -z 1338158 ']' 00:05:08.396 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:08.396 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:08.396 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:08.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:08.396 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:08.396 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.396 [2024-07-16 00:06:27.165134] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:05:08.396 [2024-07-16 00:06:27.165181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1338158 ] 00:05:08.396 [2024-07-16 00:06:27.242213] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1337925 has claimed it. 00:05:08.396 [2024-07-16 00:06:27.242253] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:08.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 838: kill: (1338158) - No such process 00:05:08.989 ERROR: process (pid: 1338158) is no longer running 00:05:08.989 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:08.989 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # return 1 00:05:08.989 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@645 -- # es=1 00:05:08.989 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:05:08.989 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:05:08.989 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:05:08.989 00:06:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:08.989 00:06:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:08.989 00:06:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:08.989 00:06:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:08.989 00:06:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1337925 00:05:08.989 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@942 -- # '[' -z 1337925 ']' 00:05:08.989 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # kill -0 1337925 00:05:08.989 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # uname 00:05:08.989 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:08.989 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1337925 00:05:08.989 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:08.989 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:08.989 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1337925' 00:05:08.989 killing process with pid 1337925 00:05:08.989 00:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@961 -- # kill 1337925 00:05:08.989 00:06:27 
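The contested core is predictable from the two masks: the first target holds 0x7 (binary 111, cores 0-2) and the second asked for 0x1c (binary 11100, cores 2-4), so they intersect on core 2 alone, exactly the core named in the claim_cpu_cores error. Shell arithmetic confirms it:

  printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. only bit 2 / core 2

check_remaining_locks then verifies the survivor still holds exactly /var/tmp/spdk_cpu_lock_000 through _002, one lock file per core of its 0x7 mask.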
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # wait 1337925 00:05:09.556 00:05:09.556 real 0m1.874s 00:05:09.556 user 0m5.302s 00:05:09.556 sys 0m0.384s 00:05:09.556 00:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:09.556 00:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.556 ************************************ 00:05:09.556 END TEST locking_overlapped_coremask 00:05:09.556 ************************************ 00:05:09.556 00:06:28 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:05:09.556 00:06:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:09.556 00:06:28 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:09.556 00:06:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:09.556 00:06:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.556 ************************************ 00:05:09.556 START TEST locking_overlapped_coremask_via_rpc 00:05:09.556 ************************************ 00:05:09.556 00:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1117 -- # locking_overlapped_coremask_via_rpc 00:05:09.556 00:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1338254 00:05:09.556 00:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1338254 /var/tmp/spdk.sock 00:05:09.556 00:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:09.556 00:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 1338254 ']' 00:05:09.556 00:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.556 00:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:09.556 00:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.556 00:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:09.556 00:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.556 [2024-07-16 00:06:28.248866] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:05:09.556 [2024-07-16 00:06:28.248903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1338254 ] 00:05:09.556 [2024-07-16 00:06:28.301705] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
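The via_rpc variant starts this first target with --disable-cpumask-locks as well (the "deactivated" notice above), deferring the lock claims to an explicit RPC later in the test. Its 0x7 mask still pins three reactors, which is why the log reports three cores available and starts reactors on cores 0, 1 and 2 in whatever order they come up; enumerating a mask's cores is plain bit arithmetic:

  mask=0x7
  for c in {0..31}; do
    (( mask & (1 << c) )) && echo "core $c"   # prints core 0, core 1, core 2
  done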
00:05:09.556 [2024-07-16 00:06:28.301729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:09.556 [2024-07-16 00:06:28.383090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.556 [2024-07-16 00:06:28.383185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.556 [2024-07-16 00:06:28.383185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.491 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:10.491 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:05:10.491 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1338435 00:05:10.491 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1338435 /var/tmp/spdk2.sock 00:05:10.491 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:10.491 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 1338435 ']' 00:05:10.491 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.491 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:10.491 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.491 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:10.491 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.491 [2024-07-16 00:06:29.106455] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:05:10.491 [2024-07-16 00:06:29.106504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1338435 ] 00:05:10.491 [2024-07-16 00:06:29.182473] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:10.491 [2024-07-16 00:06:29.182494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:10.491 [2024-07-16 00:06:29.328930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:10.491 [2024-07-16 00:06:29.329049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.491 [2024-07-16 00:06:29.329049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:11.056 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:11.315 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:05:11.315 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:11.315 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:11.315 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.315 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:11.315 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:11.315 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # local es=0 00:05:11.315 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:11.315 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:05:11.315 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:11.315 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:05:11.315 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:11.315 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:11.316 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:11.316 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.316 [2024-07-16 00:06:29.928296] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1338254 has claimed it. 
00:05:11.316 request: 00:05:11.316 { 00:05:11.316 "method": "framework_enable_cpumask_locks", 00:05:11.316 "req_id": 1 00:05:11.316 } 00:05:11.316 Got JSON-RPC error response 00:05:11.316 response: 00:05:11.316 { 00:05:11.316 "code": -32603, 00:05:11.316 "message": "Failed to claim CPU core: 2" 00:05:11.316 } 00:05:11.316 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:05:11.316 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@645 -- # es=1 00:05:11.316 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:05:11.316 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:05:11.316 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:05:11.316 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1338254 /var/tmp/spdk.sock 00:05:11.316 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 1338254 ']' 00:05:11.316 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.316 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:11.316 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.316 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:11.316 00:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.316 00:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:11.316 00:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:05:11.316 00:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1338435 /var/tmp/spdk2.sock 00:05:11.316 00:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 1338435 ']' 00:05:11.316 00:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.316 00:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:11.316 00:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
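That request/response pair is the same core contention exercised over JSON-RPC instead of at process startup: framework_enable_cpumask_locks asks the second target (running unlocked on /var/tmp/spdk2.sock) to claim its cores after the fact, and it gets the -32603 internal error because process 1338254 already holds core 2. Outside the harness the call would look roughly like this, using the rpc.py client that ships in the SPDK tree:

  # expected to fail while the first target still holds cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks

The NOT wrapper shown earlier turns that expected failure into a test pass.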
00:05:11.316 00:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:11.316 00:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.575 00:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:11.575 00:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:05:11.575 00:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:11.575 00:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:11.575 00:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:11.575 00:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:11.575 00:05:11.575 real 0m2.102s 00:05:11.575 user 0m0.878s 00:05:11.575 sys 0m0.160s 00:05:11.575 00:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:11.575 00:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.575 ************************************ 00:05:11.575 END TEST locking_overlapped_coremask_via_rpc 00:05:11.575 ************************************ 00:05:11.575 00:06:30 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:05:11.575 00:06:30 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:11.575 00:06:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1338254 ]] 00:05:11.575 00:06:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1338254 00:05:11.575 00:06:30 event.cpu_locks -- common/autotest_common.sh@942 -- # '[' -z 1338254 ']' 00:05:11.575 00:06:30 event.cpu_locks -- common/autotest_common.sh@946 -- # kill -0 1338254 00:05:11.575 00:06:30 event.cpu_locks -- common/autotest_common.sh@947 -- # uname 00:05:11.575 00:06:30 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:11.575 00:06:30 event.cpu_locks -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1338254 00:05:11.575 00:06:30 event.cpu_locks -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:11.575 00:06:30 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:11.575 00:06:30 event.cpu_locks -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1338254' 00:05:11.575 killing process with pid 1338254 00:05:11.575 00:06:30 event.cpu_locks -- common/autotest_common.sh@961 -- # kill 1338254 00:05:11.575 00:06:30 event.cpu_locks -- common/autotest_common.sh@966 -- # wait 1338254 00:05:12.144 00:06:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1338435 ]] 00:05:12.144 00:06:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1338435 00:05:12.144 00:06:30 event.cpu_locks -- common/autotest_common.sh@942 -- # '[' -z 1338435 ']' 00:05:12.144 00:06:30 event.cpu_locks -- common/autotest_common.sh@946 -- # kill -0 1338435 00:05:12.144 00:06:30 event.cpu_locks -- common/autotest_common.sh@947 -- # 
uname 00:05:12.144 00:06:30 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:12.144 00:06:30 event.cpu_locks -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1338435 00:05:12.144 00:06:30 event.cpu_locks -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:05:12.144 00:06:30 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:05:12.144 00:06:30 event.cpu_locks -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1338435' 00:05:12.144 killing process with pid 1338435 00:05:12.144 00:06:30 event.cpu_locks -- common/autotest_common.sh@961 -- # kill 1338435 00:05:12.144 00:06:30 event.cpu_locks -- common/autotest_common.sh@966 -- # wait 1338435 00:05:12.404 00:06:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:12.404 00:06:31 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:12.404 00:06:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1338254 ]] 00:05:12.404 00:06:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1338254 00:05:12.404 00:06:31 event.cpu_locks -- common/autotest_common.sh@942 -- # '[' -z 1338254 ']' 00:05:12.404 00:06:31 event.cpu_locks -- common/autotest_common.sh@946 -- # kill -0 1338254 00:05:12.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (1338254) - No such process 00:05:12.404 00:06:31 event.cpu_locks -- common/autotest_common.sh@969 -- # echo 'Process with pid 1338254 is not found' 00:05:12.404 Process with pid 1338254 is not found 00:05:12.404 00:06:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1338435 ]] 00:05:12.404 00:06:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1338435 00:05:12.404 00:06:31 event.cpu_locks -- common/autotest_common.sh@942 -- # '[' -z 1338435 ']' 00:05:12.404 00:06:31 event.cpu_locks -- common/autotest_common.sh@946 -- # kill -0 1338435 00:05:12.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (1338435) - No such process 00:05:12.404 00:06:31 event.cpu_locks -- common/autotest_common.sh@969 -- # echo 'Process with pid 1338435 is not found' 00:05:12.404 Process with pid 1338435 is not found 00:05:12.404 00:06:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:12.404 00:05:12.404 real 0m16.687s 00:05:12.404 user 0m28.954s 00:05:12.404 sys 0m4.667s 00:05:12.404 00:06:31 event.cpu_locks -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:12.404 00:06:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.404 ************************************ 00:05:12.404 END TEST cpu_locks 00:05:12.404 ************************************ 00:05:12.404 00:06:31 event -- common/autotest_common.sh@1136 -- # return 0 00:05:12.404 00:05:12.404 real 0m41.259s 00:05:12.404 user 1m18.787s 00:05:12.404 sys 0m7.898s 00:05:12.404 00:06:31 event -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:12.404 00:06:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.404 ************************************ 00:05:12.404 END TEST event 00:05:12.404 ************************************ 00:05:12.404 00:06:31 -- common/autotest_common.sh@1136 -- # return 0 00:05:12.404 00:06:31 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:12.404 00:06:31 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:12.404 00:06:31 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:12.404 
00:06:31 -- common/autotest_common.sh@10 -- # set +x 00:05:12.404 ************************************ 00:05:12.404 START TEST thread 00:05:12.404 ************************************ 00:05:12.404 00:06:31 thread -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:12.404 * Looking for test storage... 00:05:12.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:12.404 00:06:31 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:12.404 00:06:31 thread -- common/autotest_common.sh@1093 -- # '[' 8 -le 1 ']' 00:05:12.404 00:06:31 thread -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:12.404 00:06:31 thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.664 ************************************ 00:05:12.664 START TEST thread_poller_perf 00:05:12.664 ************************************ 00:05:12.664 00:06:31 thread.thread_poller_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:12.664 [2024-07-16 00:06:31.298609] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:05:12.664 [2024-07-16 00:06:31.298668] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1338982 ] 00:05:12.664 [2024-07-16 00:06:31.356668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.664 [2024-07-16 00:06:31.429721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.664 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:14.043 ====================================== 00:05:14.043 busy:2305873040 (cyc) 00:05:14.043 total_run_count: 410000 00:05:14.043 tsc_hz: 2300000000 (cyc) 00:05:14.043 ====================================== 00:05:14.043 poller_cost: 5624 (cyc), 2445 (nsec) 00:05:14.043 00:05:14.043 real 0m1.227s 00:05:14.043 user 0m1.146s 00:05:14.043 sys 0m0.078s 00:05:14.043 00:06:32 thread.thread_poller_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:14.043 00:06:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:14.043 ************************************ 00:05:14.043 END TEST thread_poller_perf 00:05:14.043 ************************************ 00:05:14.043 00:06:32 thread -- common/autotest_common.sh@1136 -- # return 0 00:05:14.043 00:06:32 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:14.043 00:06:32 thread -- common/autotest_common.sh@1093 -- # '[' 8 -le 1 ']' 00:05:14.043 00:06:32 thread -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:14.043 00:06:32 thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.043 ************************************ 00:05:14.043 START TEST thread_poller_perf 00:05:14.043 ************************************ 00:05:14.043 00:06:32 thread.thread_poller_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:14.043 [2024-07-16 00:06:32.568478] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:05:14.043 [2024-07-16 00:06:32.568523] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339217 ] 00:05:14.043 [2024-07-16 00:06:32.622146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.043 [2024-07-16 00:06:32.696161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.043 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:14.978 ====================================== 00:05:14.978 busy:2301816940 (cyc) 00:05:14.978 total_run_count: 5215000 00:05:14.978 tsc_hz: 2300000000 (cyc) 00:05:14.978 ====================================== 00:05:14.978 poller_cost: 441 (cyc), 191 (nsec) 00:05:14.978 00:05:14.978 real 0m1.211s 00:05:14.978 user 0m1.139s 00:05:14.978 sys 0m0.068s 00:05:14.978 00:06:33 thread.thread_poller_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:14.978 00:06:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:14.978 ************************************ 00:05:14.978 END TEST thread_poller_perf 00:05:14.978 ************************************ 00:05:14.978 00:06:33 thread -- common/autotest_common.sh@1136 -- # return 0 00:05:14.978 00:06:33 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:14.978 00:05:14.978 real 0m2.632s 00:05:14.978 user 0m2.357s 00:05:14.979 sys 0m0.282s 00:05:14.979 00:06:33 thread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:14.979 00:06:33 thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.979 ************************************ 00:05:14.979 END TEST thread 00:05:14.979 ************************************ 00:05:14.979 00:06:33 -- common/autotest_common.sh@1136 -- # return 0 00:05:14.979 00:06:33 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:14.979 00:06:33 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:15.238 00:06:33 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:15.238 00:06:33 -- common/autotest_common.sh@10 -- # set +x 00:05:15.238 ************************************ 00:05:15.238 START TEST accel 00:05:15.238 ************************************ 00:05:15.239 00:06:33 accel -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:15.239 * Looking for test storage... 00:05:15.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:15.239 00:06:33 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:15.239 00:06:33 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:15.239 00:06:33 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:15.239 00:06:33 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1339515 00:05:15.239 00:06:33 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:15.239 00:06:33 accel -- accel/accel.sh@63 -- # waitforlisten 1339515 00:05:15.239 00:06:33 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:15.239 00:06:33 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:15.239 00:06:33 accel -- common/autotest_common.sh@823 -- # '[' -z 1339515 ']' 00:05:15.239 00:06:33 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:15.239 00:06:33 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.239 00:06:33 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.239 00:06:33 accel -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.239 00:06:33 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:15.239 00:06:33 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:15.239 00:06:33 accel -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:15.239 00:06:33 accel -- accel/accel.sh@41 -- # jq -r . 
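The two poller_perf summaries above are internally consistent: poller_cost is busy cycles divided by total_run_count, and the nanosecond figure follows from the reported 2.3 GHz tsc_hz. A quick recomputation in shell arithmetic, using the counters exactly as printed in this log:

# -l 1 run (1 us poller period)
echo $((2305873040 / 410000))               # 5624 cyc per poll
echo $((5624 * 1000000000 / 2300000000))    # 2445 nsec
# -l 0 run (busy polling, no period)
echo $((2301816940 / 5215000))              # 441 cyc per poll
echo $((441 * 1000000000 / 2300000000))     # 191 nsec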
00:05:15.239 00:06:33 accel -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.239 00:06:33 accel -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:15.239 00:06:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:15.239 [2024-07-16 00:06:33.982382] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:05:15.239 [2024-07-16 00:06:33.982433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339515 ] 00:05:15.239 [2024-07-16 00:06:34.035221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.498 [2024-07-16 00:06:34.117636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.067 00:06:34 accel -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:16.067 00:06:34 accel -- common/autotest_common.sh@856 -- # return 0 00:05:16.067 00:06:34 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:16.067 00:06:34 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:16.067 00:06:34 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:16.067 00:06:34 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:16.067 00:06:34 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:16.067 00:06:34 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:16.067 00:06:34 accel -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:16.067 00:06:34 accel -- common/autotest_common.sh@10 -- # set +x 00:05:16.067 00:06:34 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:16.067 00:06:34 accel -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:16.067 00:06:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:16.067 00:06:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:16.067 00:06:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:16.067 00:06:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:16.067 00:06:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:16.067 00:06:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:16.067 00:06:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:16.067 00:06:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:16.067 00:06:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:16.067 00:06:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:16.067 00:06:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:16.067 00:06:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:16.067 00:06:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:16.067 00:06:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:16.067 00:06:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:16.067 00:06:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:16.067 00:06:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:16.067 00:06:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:16.067 00:06:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:16.067 00:06:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:16.067 00:06:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:16.067 00:06:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:16.067 00:06:34 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:16.067 00:06:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:16.067 00:06:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:16.067 00:06:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:16.067 00:06:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:16.067 00:06:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:16.067 00:06:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:16.067 00:06:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:16.067 00:06:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:16.067 00:06:34 accel -- accel/accel.sh@75 -- # killprocess 1339515 00:05:16.067 00:06:34 accel -- common/autotest_common.sh@942 -- # '[' -z 1339515 ']' 00:05:16.067 00:06:34 accel -- common/autotest_common.sh@946 -- # kill -0 1339515 00:05:16.067 00:06:34 accel -- common/autotest_common.sh@947 -- # uname 00:05:16.067 00:06:34 accel -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:16.067 00:06:34 accel -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1339515 00:05:16.067 00:06:34 accel -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:16.067 00:06:34 accel -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:16.067 00:06:34 accel -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1339515' 00:05:16.067 killing process with pid 1339515 00:05:16.067 00:06:34 accel -- common/autotest_common.sh@961 -- # kill 1339515 00:05:16.067 00:06:34 accel -- common/autotest_common.sh@966 -- # wait 1339515 00:05:16.636 00:06:35 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:16.636 00:06:35 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:16.636 00:06:35 accel -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:05:16.636 00:06:35 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:16.636 00:06:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:16.636 00:06:35 accel.accel_help -- common/autotest_common.sh@1117 -- # accel_perf -h 00:05:16.636 00:06:35 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:16.636 00:06:35 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:16.636 00:06:35 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:16.636 00:06:35 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:16.636 00:06:35 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.636 00:06:35 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.636 00:06:35 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:16.637 00:06:35 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:16.637 00:06:35 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:16.637 00:06:35 accel.accel_help -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:16.637 00:06:35 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:16.637 00:06:35 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:16.637 00:06:35 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:16.637 00:06:35 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:05:16.637 00:06:35 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:16.637 00:06:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:16.637 ************************************ 00:05:16.637 START TEST accel_missing_filename 00:05:16.637 ************************************ 00:05:16.637 00:06:35 accel.accel_missing_filename -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w compress 00:05:16.637 00:06:35 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # local es=0 00:05:16.637 00:06:35 accel.accel_missing_filename -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:16.637 00:06:35 accel.accel_missing_filename -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:05:16.637 00:06:35 accel.accel_missing_filename -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:16.637 00:06:35 accel.accel_missing_filename -- common/autotest_common.sh@634 -- # type -t accel_perf 00:05:16.637 00:06:35 accel.accel_missing_filename -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:16.637 00:06:35 accel.accel_missing_filename -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w compress 00:05:16.637 00:06:35 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:16.637 00:06:35 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:16.637 00:06:35 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:16.637 00:06:35 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:16.637 00:06:35 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.637 00:06:35 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.637 00:06:35 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:16.637 00:06:35 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:16.637 00:06:35 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:16.637 [2024-07-16 00:06:35.365511] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:05:16.637 [2024-07-16 00:06:35.365565] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339790 ] 00:05:16.637 [2024-07-16 00:06:35.420875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.897 [2024-07-16 00:06:35.494233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.897 [2024-07-16 00:06:35.535058] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:16.897 [2024-07-16 00:06:35.594751] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:16.897 A filename is required. 
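"A filename is required." is the expected failure here: compress needs an input file via -l, and the test wraps accel_perf in the NOT helper, which inverts the exit status. Judging from the es= handling traced just below, a simplified sketch of that helper (the real one in autotest_common.sh also validates the argument and folds signal exits, hence the es > 128 branch):

NOT() {
  local es=0
  "$@" || es=$?
  (( es != 0 ))   # succeed only when the wrapped command failed
}
NOT accel_perf -t 1 -w compress && echo 'negative test passed'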
00:05:16.897 00:06:35 accel.accel_missing_filename -- common/autotest_common.sh@645 -- # es=234 00:05:16.897 00:06:35 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:05:16.897 00:06:35 accel.accel_missing_filename -- common/autotest_common.sh@654 -- # es=106 00:05:16.897 00:06:35 accel.accel_missing_filename -- common/autotest_common.sh@655 -- # case "$es" in 00:05:16.897 00:06:35 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # es=1 00:05:16.897 00:06:35 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:05:16.897 00:05:16.897 real 0m0.330s 00:05:16.897 user 0m0.252s 00:05:16.897 sys 0m0.115s 00:05:16.897 00:06:35 accel.accel_missing_filename -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:16.897 00:06:35 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:16.897 ************************************ 00:05:16.897 END TEST accel_missing_filename 00:05:16.897 ************************************ 00:05:16.897 00:06:35 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:16.897 00:06:35 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:16.897 00:06:35 accel -- common/autotest_common.sh@1093 -- # '[' 10 -le 1 ']' 00:05:16.897 00:06:35 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:16.897 00:06:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:16.897 ************************************ 00:05:16.897 START TEST accel_compress_verify 00:05:16.897 ************************************ 00:05:16.897 00:06:35 accel.accel_compress_verify -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:16.897 00:06:35 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # local es=0 00:05:16.897 00:06:35 accel.accel_compress_verify -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:16.897 00:06:35 accel.accel_compress_verify -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:05:16.897 00:06:35 accel.accel_compress_verify -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:16.897 00:06:35 accel.accel_compress_verify -- common/autotest_common.sh@634 -- # type -t accel_perf 00:05:16.897 00:06:35 accel.accel_compress_verify -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:16.897 00:06:35 accel.accel_compress_verify -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:16.897 00:06:35 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:16.897 00:06:35 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:16.897 00:06:35 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:16.897 00:06:35 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:16.897 00:06:35 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.897 00:06:35 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.897 00:06:35 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:16.897 00:06:35 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:16.897 00:06:35 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:17.157 [2024-07-16 00:06:35.756109] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:05:17.157 [2024-07-16 00:06:35.756175] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339815 ] 00:05:17.157 [2024-07-16 00:06:35.811186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.157 [2024-07-16 00:06:35.883151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.157 [2024-07-16 00:06:35.924094] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:17.157 [2024-07-16 00:06:35.983578] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:17.417 00:05:17.417 Compression does not support the verify option, aborting. 00:05:17.417 00:06:36 accel.accel_compress_verify -- common/autotest_common.sh@645 -- # es=161 00:05:17.417 00:06:36 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:05:17.417 00:06:36 accel.accel_compress_verify -- common/autotest_common.sh@654 -- # es=33 00:05:17.417 00:06:36 accel.accel_compress_verify -- common/autotest_common.sh@655 -- # case "$es" in 00:05:17.417 00:06:36 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # es=1 00:05:17.417 00:06:36 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:05:17.417 00:05:17.417 real 0m0.328s 00:05:17.417 user 0m0.254s 00:05:17.417 sys 0m0.113s 00:05:17.417 00:06:36 accel.accel_compress_verify -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:17.417 00:06:36 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:17.417 ************************************ 00:05:17.417 END TEST accel_compress_verify 00:05:17.417 ************************************ 00:05:17.417 00:06:36 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:17.417 00:06:36 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:17.417 00:06:36 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:05:17.417 00:06:36 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:17.417 00:06:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.417 ************************************ 00:05:17.417 START TEST accel_wrong_workload 00:05:17.417 ************************************ 00:05:17.418 00:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w foobar 00:05:17.418 00:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # local es=0 00:05:17.418 00:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:17.418 00:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:05:17.418 00:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:17.418 00:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@634 -- # type -t accel_perf 00:05:17.418 00:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 
00:05:17.418 00:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w foobar 00:05:17.418 00:06:36 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:17.418 00:06:36 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:17.418 00:06:36 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.418 00:06:36 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.418 00:06:36 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.418 00:06:36 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.418 00:06:36 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.418 00:06:36 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:17.418 00:06:36 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:17.418 Unsupported workload type: foobar 00:05:17.418 [2024-07-16 00:06:36.141288] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:17.418 accel_perf options: 00:05:17.418 [-h help message] 00:05:17.418 [-q queue depth per core] 00:05:17.418 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:17.418 [-T number of threads per core 00:05:17.418 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:17.418 [-t time in seconds] 00:05:17.418 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:17.418 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:17.418 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:17.418 [-l for compress/decompress workloads, name of uncompressed input file 00:05:17.418 [-S for crc32c workload, use this seed value (default 0) 00:05:17.418 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:17.418 [-f for fill workload, use this BYTE value (default 255) 00:05:17.418 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:17.418 [-y verify result if this switch is on] 00:05:17.418 [-a tasks to allocate per core (default: same value as -q)] 00:05:17.418 Can be used to spread operations across a wider range of memory. 
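The usage text above enumerates every flag accel_perf accepts, printed because foobar is not a valid -w workload. A representative valid invocation using only options listed there (the -q and -o values are arbitrary examples, not taken from this run):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
  -q 64 -o 4096 -t 1 -w crc32c -S 32 -y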
00:05:17.418 00:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@645 -- # es=1 00:05:17.418 00:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:05:17.418 00:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:05:17.418 00:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:05:17.418 00:05:17.418 real 0m0.031s 00:05:17.418 user 0m0.017s 00:05:17.418 sys 0m0.014s 00:05:17.418 00:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:17.418 00:06:36 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:17.418 ************************************ 00:05:17.418 END TEST accel_wrong_workload 00:05:17.418 ************************************ 00:05:17.418 Error: writing output failed: Broken pipe 00:05:17.418 00:06:36 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:17.418 00:06:36 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:17.418 00:06:36 accel -- common/autotest_common.sh@1093 -- # '[' 10 -le 1 ']' 00:05:17.418 00:06:36 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:17.418 00:06:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.418 ************************************ 00:05:17.418 START TEST accel_negative_buffers 00:05:17.418 ************************************ 00:05:17.418 00:06:36 accel.accel_negative_buffers -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:17.418 00:06:36 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # local es=0 00:05:17.418 00:06:36 accel.accel_negative_buffers -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:17.418 00:06:36 accel.accel_negative_buffers -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:05:17.418 00:06:36 accel.accel_negative_buffers -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:17.418 00:06:36 accel.accel_negative_buffers -- common/autotest_common.sh@634 -- # type -t accel_perf 00:05:17.418 00:06:36 accel.accel_negative_buffers -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:17.418 00:06:36 accel.accel_negative_buffers -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w xor -y -x -1 00:05:17.418 00:06:36 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:17.418 00:06:36 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:17.418 00:06:36 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.418 00:06:36 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.418 00:06:36 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.418 00:06:36 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.418 00:06:36 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.418 00:06:36 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:17.418 00:06:36 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:17.418 -x option must be non-negative. 
00:05:17.418 [2024-07-16 00:06:36.239276] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:17.418 accel_perf options: 00:05:17.418 [-h help message] 00:05:17.418 [-q queue depth per core] 00:05:17.418 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:17.418 [-T number of threads per core 00:05:17.418 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:17.418 [-t time in seconds] 00:05:17.418 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:17.418 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:17.418 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:17.418 [-l for compress/decompress workloads, name of uncompressed input file 00:05:17.418 [-S for crc32c workload, use this seed value (default 0) 00:05:17.418 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:17.418 [-f for fill workload, use this BYTE value (default 255) 00:05:17.418 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:17.418 [-y verify result if this switch is on] 00:05:17.418 [-a tasks to allocate per core (default: same value as -q)] 00:05:17.418 Can be used to spread operations across a wider range of memory. 00:05:17.418 00:06:36 accel.accel_negative_buffers -- common/autotest_common.sh@645 -- # es=1 00:05:17.418 00:06:36 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:05:17.418 00:06:36 accel.accel_negative_buffers -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:05:17.418 00:06:36 accel.accel_negative_buffers -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:05:17.418 00:05:17.418 real 0m0.032s 00:05:17.418 user 0m0.023s 00:05:17.418 sys 0m0.009s 00:05:17.418 00:06:36 accel.accel_negative_buffers -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:17.418 00:06:36 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:17.418 ************************************ 00:05:17.418 END TEST accel_negative_buffers 00:05:17.418 ************************************ 00:05:17.418 Error: writing output failed: Broken pipe 00:05:17.679 00:06:36 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:17.679 00:06:36 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:17.679 00:06:36 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:05:17.679 00:06:36 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:17.679 00:06:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.679 ************************************ 00:05:17.679 START TEST accel_crc32c 00:05:17.679 ************************************ 00:05:17.679 00:06:36 accel.accel_crc32c -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:17.679 [2024-07-16 00:06:36.334028] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:05:17.679 [2024-07-16 00:06:36.334080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339880 ] 00:05:17.679 [2024-07-16 00:06:36.391620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.679 [2024-07-16 00:06:36.467333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.679 00:06:36 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:17.679 00:06:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.680 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.680 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.680 00:06:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:17.680 00:06:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.680 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.680 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.680 00:06:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:17.680 00:06:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.680 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.680 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.680 00:06:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:17.680 00:06:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.680 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.680 00:06:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.076 00:06:37 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:19.076 00:06:37 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:19.076 00:05:19.076 real 0m1.340s 00:05:19.076 user 0m1.234s 00:05:19.076 sys 0m0.121s 00:05:19.076 00:06:37 accel.accel_crc32c -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:19.076 00:06:37 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:19.076 ************************************ 00:05:19.076 END TEST accel_crc32c 00:05:19.076 ************************************ 00:05:19.076 00:06:37 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:19.076 00:06:37 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:19.076 00:06:37 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:05:19.076 00:06:37 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:19.076 00:06:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:19.076 ************************************ 00:05:19.076 START TEST accel_crc32c_C2 00:05:19.076 ************************************ 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 
00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:19.076 [2024-07-16 00:06:37.738806] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:05:19.076 [2024-07-16 00:06:37.738873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340136 ] 00:05:19.076 [2024-07-16 00:06:37.794936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.076 [2024-07-16 00:06:37.867936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.076 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.077 00:06:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
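The long run of val= assignments above is accel_test parsing accel_perf's run summary: each summary line is split on ':' into var and val, and a case statement records the two fields checked at the end of the test (accel_module=software, accel_opc=crc32c, as traced at accel.sh@22 and @23). A rough reconstruction of that loop; the exact case patterns and the trimming step are assumptions read off the trace:

while IFS=: read -r var val; do
  val=${val# }                      # drop the space after the colon
  case "$var" in
    *opc*) accel_opc=$val ;;        # e.g. crc32c
    *module*) accel_module=$val ;;  # e.g. software
  esac
done < <(accel_perf "$@")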
00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.455 00:05:20.455 real 0m1.335s 00:05:20.455 user 0m1.231s 00:05:20.455 sys 0m0.117s 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:20.455 00:06:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:20.455 ************************************ 00:05:20.455 END TEST accel_crc32c_C2 00:05:20.455 ************************************ 00:05:20.455 00:06:39 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:20.455 00:06:39 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:20.456 00:06:39 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:05:20.456 00:06:39 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:20.456 00:06:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.456 ************************************ 00:05:20.456 START TEST accel_copy 00:05:20.456 ************************************ 00:05:20.456 00:06:39 accel.accel_copy -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w copy -y 00:05:20.456 00:06:39 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:20.456 00:06:39 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:20.456 00:06:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.456 00:06:39 
00:05:20.455 00:06:39 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:05:20.456 00:06:39 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']'
00:05:20.456 00:06:39 accel -- common/autotest_common.sh@1099 -- # xtrace_disable
00:05:20.456 00:06:39 accel -- common/autotest_common.sh@10 -- # set +x
00:05:20.456 ************************************
00:05:20.456 START TEST accel_copy
00:05:20.456 ************************************
00:05:20.456 00:06:39 accel.accel_copy -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w copy -y
00:05:20.456 00:06:39 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc
00:05:20.456 00:06:39 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module
00:05:20.456 00:06:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:05:20.456 00:06:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:05:20.456 00:06:39 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:05:20.456 00:06:39 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:05:20.456 00:06:39 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:20.456 00:06:39 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:05:20.456 00:06:39 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:20.456 00:06:39 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:20.456 00:06:39 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:20.456 00:06:39 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:20.456 00:06:39 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=,
00:05:20.456 00:06:39 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
00:05:20.456 [2024-07-16 00:06:39.136235] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization...
00:05:20.456 [2024-07-16 00:06:39.136282] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340389 ]
00:05:20.456 [2024-07-16 00:06:39.190900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:20.456 [2024-07-16 00:06:39.263710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.456 00:06:39 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:05:20.715 00:06:39 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
00:05:20.715 00:06:39 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:05:20.715 00:06:39 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:20.715 00:06:39 accel.accel_copy -- accel/accel.sh@20 -- # val=software
00:05:20.715 00:06:39 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:05:20.715 00:06:39 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:05:20.715 00:06:39 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:05:20.715 00:06:39 accel.accel_copy -- accel/accel.sh@20 -- # val=1
00:05:20.715 00:06:39 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:05:20.715 00:06:39 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
00:05:21.652 00:06:40 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:21.652 00:06:40 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:05:21.652 00:06:40 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:21.652 
00:05:21.652 real 0m1.334s
00:05:21.652 user 0m1.240s
00:05:21.652 sys 0m0.108s
00:05:21.652 00:06:40 accel.accel_copy -- common/autotest_common.sh@1118 -- # xtrace_disable
00:05:21.652 00:06:40 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:05:21.652 ************************************
00:05:21.652 END TEST accel_copy
00:05:21.652 ************************************
00:05:21.652 00:06:40 accel -- common/autotest_common.sh@1136 -- # return 0
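Stripped of the xtrace noise, each block above is a single accel_perf run. A hedged way to reproduce the traced copy run by hand; the binary path and flags are verbatim from the trace, while the empty JSON object fed on fd 62 is an assumed minimal engine config:

    # Same binary and flags as the traced invocation; the engine config is
    # delivered on file descriptor 62 and read back via -c /dev/fd/62.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w copy -y 62<<< '{}'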
00:05:21.652 00:06:40 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:21.652 00:06:40 accel -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']'
00:05:21.652 00:06:40 accel -- common/autotest_common.sh@1099 -- # xtrace_disable
00:05:21.652 00:06:40 accel -- common/autotest_common.sh@10 -- # set +x
00:05:21.653 ************************************
00:05:21.653 START TEST accel_fill
00:05:21.653 ************************************
00:05:21.653 00:06:40 accel.accel_fill -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:21.653 00:06:40 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc
00:05:21.653 00:06:40 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module
00:05:21.653 00:06:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:05:21.653 00:06:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:05:21.653 00:06:40 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:21.653 00:06:40 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:21.653 00:06:40 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:05:21.653 00:06:40 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:21.653 00:06:40 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:21.653 00:06:40 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:21.653 00:06:40 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:21.653 00:06:40 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:21.653 00:06:40 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=,
00:05:21.653 00:06:40 accel.accel_fill -- accel/accel.sh@41 -- # jq -r .
00:05:21.946 [2024-07-16 00:06:40.520036] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization...
00:05:21.946 [2024-07-16 00:06:40.520109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340642 ]
00:05:21.946 [2024-07-16 00:06:40.575098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:21.946 [2024-07-16 00:06:40.648234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:21.946 00:06:40 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
00:05:21.946 00:06:40 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:05:21.946 00:06:40 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:05:21.946 00:06:40 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:05:21.947 00:06:40 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:21.947 00:06:40 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:05:21.947 00:06:40 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:05:21.947 00:06:40 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:05:21.947 00:06:40 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:05:21.947 00:06:40 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:05:21.947 00:06:40 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:05:21.947 00:06:40 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:05:23.327 00:06:41 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:23.327 00:06:41 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:05:23.327 00:06:41 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:23.327 
00:05:23.327 real 0m1.338s
00:05:23.327 user 0m1.239s
00:05:23.327 sys 0m0.112s
00:05:23.327 00:06:41 accel.accel_fill -- common/autotest_common.sh@1118 -- # xtrace_disable
00:05:23.327 00:06:41 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:05:23.327 ************************************
00:05:23.327 END TEST accel_fill
00:05:23.327 ************************************
00:05:23.327 00:06:41 accel -- common/autotest_common.sh@1136 -- # return 0
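The START/END banners, the `'[' N -le 1 ']'` argument check, and the `real/user/sys` triple around every test all come from the same `run_test` wrapper in common/autotest_common.sh. A simplified sketch of that pattern; illustrative only, not the verbatim helper, which also manages xtrace state:

    run_test() {
        local name=$1; shift
        [ $# -le 1 ] && return 1   # the traced "'[' 13 -le 1 ']'" sanity check
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                  # e.g. accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y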
00:05:23.327 00:06:41 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:05:23.327 00:06:41 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']'
00:05:23.327 00:06:41 accel -- common/autotest_common.sh@1099 -- # xtrace_disable
00:05:23.327 00:06:41 accel -- common/autotest_common.sh@10 -- # set +x
00:05:23.327 ************************************
00:05:23.327 START TEST accel_copy_crc32c
00:05:23.327 ************************************
00:05:23.327 00:06:41 accel.accel_copy_crc32c -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w copy_crc32c -y
00:05:23.327 00:06:41 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:05:23.327 00:06:41 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module
00:05:23.327 00:06:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:05:23.327 00:06:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:05:23.327 00:06:41 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:05:23.327 00:06:41 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:05:23.327 00:06:41 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:05:23.327 00:06:41 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:23.327 00:06:41 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:23.327 00:06:41 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:23.327 00:06:41 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:23.327 00:06:41 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:23.327 00:06:41 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:05:23.327 00:06:41 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:05:23.327 [2024-07-16 00:06:41.907402] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization...
00:05:23.327 [2024-07-16 00:06:41.907469] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340892 ]
00:05:23.327 [2024-07-16 00:06:41.962070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:23.327 [2024-07-16 00:06:42.038331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:23.327 00:06:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:05:23.327 00:06:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:05:23.327 00:06:42 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:05:23.327 00:06:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:05:23.327 00:06:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:23.327 00:06:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:23.327 00:06:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:05:23.327 00:06:42 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:05:23.327 00:06:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:05:23.327 00:06:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:05:23.327 00:06:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:05:23.327 00:06:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:05:23.327 00:06:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:05:24.704 00:06:43 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:24.704 00:06:43 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:05:24.704 00:06:43 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:24.704 
00:05:24.704 real 0m1.341s
00:05:24.704 user 0m1.237s
00:05:24.704 sys 0m0.118s
00:05:24.704 00:06:43 accel.accel_copy_crc32c -- common/autotest_common.sh@1118 -- # xtrace_disable
00:05:24.704 00:06:43 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:05:24.704 ************************************
00:05:24.704 END TEST accel_copy_crc32c
00:05:24.704 ************************************
00:05:24.704 00:06:43 accel -- common/autotest_common.sh@1136 -- # return 0
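The `build_accel_config` lines (`accel_json_cfg=()`, the three `[[ 0 -gt 0 ]]` guards, `local IFS=,`, `jq -r .`) show how the JSON handed to accel_perf on /dev/fd/62 is assembled: optional module entries are appended to an array, joined with commas, and normalized by jq. A hedged sketch of that mechanism; in all of the runs above the array stays empty, so the plain software module is used, and the commented example entry is hypothetical:

    accel_json_cfg=()
    # A non-default engine would append an entry, e.g. (hypothetical):
    # accel_json_cfg+=('"method": "some_scan_accel_module"')

    build_config() {
        local IFS=,                              # join array entries with commas
        jq -r . <<< "{ ${accel_json_cfg[*]} }"   # normalize to clean JSON
    }
    build_config   # with the empty array this prints just "{}"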
00:05:24.704 00:06:43 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:05:24.704 00:06:43 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']'
00:05:24.704 00:06:43 accel -- common/autotest_common.sh@1099 -- # xtrace_disable
00:05:24.704 00:06:43 accel -- common/autotest_common.sh@10 -- # set +x
00:05:24.704 ************************************
00:05:24.704 START TEST accel_copy_crc32c_C2
00:05:24.704 ************************************
00:05:24.704 00:06:43 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:05:24.704 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:05:24.704 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:05:24.704 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:05:24.704 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:05:24.704 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:05:24.704 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:05:24.704 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:05:24.704 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:24.704 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:24.704 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:24.704 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:24.704 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:24.704 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:05:24.704 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:05:24.705 [2024-07-16 00:06:43.312508] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization...
00:05:24.705 [2024-07-16 00:06:43.312573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1341153 ]
00:05:24.705 [2024-07-16 00:06:43.368281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:24.705 [2024-07-16 00:06:43.441071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:24.705 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:05:24.705 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:05:24.705 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:05:24.705 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:05:24.705 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:24.705 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:05:24.705 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:05:24.705 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:05:24.705 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:24.705 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:24.705 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:05:24.705 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:05:24.705 00:06:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:05:26.087 00:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:26.088 00:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:05:26.088 00:06:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:26.088 
00:05:26.088 real 0m1.337s
00:05:26.088 user 0m1.233s
00:05:26.088 sys 0m0.119s
00:05:26.088 00:06:44 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1118 -- # xtrace_disable
00:05:26.088 00:06:44 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:05:26.088 ************************************
00:05:26.088 END TEST accel_copy_crc32c_C2
00:05:26.088 ************************************
00:05:26.088 00:06:44 accel -- common/autotest_common.sh@1136 -- # return 0
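The `accel.sh@27` lines appear post-expansion because xtrace prints variables already substituted; `software == \s\o\f\t\w\a\r\e` is simply how bash escapes the right-hand side of a `[[ == ]]` pattern match in the trace. Written with the variables still in place, the checks amount to:

    # Post-run verification as it reads before xtrace expansion:
    [[ -n $accel_module ]]            # a module name was parsed (here: software)
    [[ -n $accel_opc ]]               # the opcode was parsed (here: copy_crc32c)
    [[ $accel_module == software ]]   # the software engine path was exercised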
00:05:26.088 00:06:44 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:05:26.088 00:06:44 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']'
00:05:26.088 00:06:44 accel -- common/autotest_common.sh@1099 -- # xtrace_disable
00:05:26.088 00:06:44 accel -- common/autotest_common.sh@10 -- # set +x
00:05:26.088 ************************************
00:05:26.088 START TEST accel_dualcast
00:05:26.088 ************************************
00:05:26.088 00:06:44 accel.accel_dualcast -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dualcast -y
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=,
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
00:05:26.088 [2024-07-16 00:06:44.701040] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization...
00:05:26.088 [2024-07-16 00:06:44.701089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1341405 ]
00:05:26.088 [2024-07-16 00:06:44.754993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:26.088 [2024-07-16 00:06:44.827966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:05:26.088 00:06:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
00:05:27.466 00:06:46 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:27.466 00:06:46 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:05:27.466 00:06:46 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:27.466 
00:05:27.466 real 0m1.333s
00:05:27.466 user 0m1.229s
00:05:27.466 sys 0m0.116s
00:05:27.466 00:06:46 accel.accel_dualcast -- common/autotest_common.sh@1118 -- # xtrace_disable
00:05:27.466 00:06:46 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:05:27.466 ************************************
00:05:27.466 END TEST accel_dualcast
00:05:27.466 ************************************
00:05:27.467 00:06:46 accel -- common/autotest_common.sh@1136 -- # return 0
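Every run prints the same DPDK EAL parameter line, with only the `--file-prefix` PID changing. For reference, the flags broken out; values are verbatim from the log, while the glosses are standard DPDK EAL semantics rather than anything this log states:

    # DPDK EAL arguments as passed by accel_perf in these runs:
    eal_args=(
        --no-shconf                      # no shared runtime config between processes
        -c 0x1                           # core mask: pin to core 0 only
        --huge-unlink                    # unlink hugepage files after mapping
        --no-telemetry                   # disable the DPDK telemetry socket
        --log-level=lib.eal:6            # per-component log verbosity
        --base-virtaddr=0x200000000000   # fixed base for memory mappings
        --match-allocations              # free hugepages exactly as allocated
        --file-prefix=spdk_pid1341405    # per-process hugepage file prefix (PID-based)
    )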
00:05:27.467 [2024-07-16 00:06:46.094522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1341657 ] 00:05:27.467 [2024-07-16 00:06:46.148764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.467 [2024-07-16 00:06:46.220851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 
00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:27.467 00:06:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:28.840 00:06:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.840 00:05:28.840 real 0m1.332s 00:05:28.840 user 0m1.235s 00:05:28.840 sys 0m0.110s 00:05:28.840 00:06:47 accel.accel_compare -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:28.840 00:06:47 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:28.840 ************************************ 00:05:28.840 END TEST accel_compare 00:05:28.840 ************************************ 00:05:28.840 00:06:47 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:28.840 00:06:47 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:28.840 00:06:47 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:05:28.840 00:06:47 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:28.840 00:06:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:28.840 ************************************ 00:05:28.840 START TEST accel_xor 00:05:28.840 ************************************ 00:05:28.840 00:06:47 accel.accel_xor -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w xor -y 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:28.840 [2024-07-16 00:06:47.489108] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
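The case/IFS=: blocks repeated throughout this trace are accel.sh consuming accel_perf's key:value summary on stdout and latching the module and opcode for the final [[ -n software ]] / [[ -n xor ]] checks. A stripped-down illustration of that pattern follows; the key names here are hypothetical placeholders, the real strings come from accel_perf's summary output:

  # Illustration only: parse "key: value" lines and keep two fields.
  # "Module" and "Workload" are assumed key names, not verified ones.
  while IFS=: read -r var val; do
    case "$var" in
      *Module*)   accel_module=${val# } ;;
      *Workload*) accel_opc=${val# } ;;
    esac
  done < <(sudo ./build/examples/accel_perf -t 1 -w xor -y)
  [[ -n $accel_module && -n $accel_opc ]] && echo "ran $accel_opc on $accel_module"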
00:05:28.840 [2024-07-16 00:06:47.489160] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1341929 ] 00:05:28.840 [2024-07-16 00:06:47.542557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.840 [2024-07-16 00:06:47.615252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:28.840 00:06:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:28.841 00:06:47 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:28.841 00:06:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:30.216 00:06:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.216 00:05:30.216 real 0m1.332s 00:05:30.216 user 0m1.237s 00:05:30.216 sys 0m0.109s 00:05:30.216 00:06:48 accel.accel_xor -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:30.216 00:06:48 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:30.216 ************************************ 00:05:30.216 END TEST accel_xor 00:05:30.216 ************************************ 00:05:30.217 00:06:48 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:30.217 00:06:48 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:30.217 00:06:48 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:05:30.217 00:06:48 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:30.217 00:06:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.217 ************************************ 00:05:30.217 START TEST accel_xor 00:05:30.217 ************************************ 00:05:30.217 00:06:48 accel.accel_xor -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w xor -y -x 3 00:05:30.217 00:06:48 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:30.217 00:06:48 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:30.217 00:06:48 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:30.217 00:06:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.217 00:06:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.217 00:06:48 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:30.217 00:06:48 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:30.217 00:06:48 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.217 00:06:48 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.217 00:06:48 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.217 00:06:48 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.217 00:06:48 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.217 00:06:48 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:30.217 00:06:48 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:30.217 [2024-07-16 00:06:48.873387] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
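accel_xor is then re-registered under the same test name but with -x 3 appended; the config echo shows the source-buffer count moving from the default 2 to 3, so -x evidently selects how many source buffers feed the xor. A sketch of the second variant on its own, under the same setup assumptions as above:

  # xor across three source buffers instead of the default two.
  sudo ./build/examples/accel_perf -t 1 -w xor -y -x 3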
00:05:30.217 [2024-07-16 00:06:48.873423] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1342187 ] 00:05:30.217 [2024-07-16 00:06:48.925993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.217 [2024-07-16 00:06:48.998336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:30.217 00:06:49 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:30.217 00:06:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:31.594 00:06:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.594 00:05:31.594 real 0m1.322s 00:05:31.594 user 0m1.224s 00:05:31.594 sys 0m0.112s 00:05:31.594 00:06:50 accel.accel_xor -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:31.594 00:06:50 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:31.594 ************************************ 00:05:31.594 END TEST accel_xor 00:05:31.594 ************************************ 00:05:31.594 00:06:50 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:31.594 00:06:50 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:31.594 00:06:50 accel -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:05:31.594 00:06:50 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:31.594 00:06:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.594 ************************************ 00:05:31.594 START TEST accel_dif_verify 00:05:31.594 ************************************ 00:05:31.594 00:06:50 accel.accel_dif_verify -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dif_verify 00:05:31.594 00:06:50 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:31.594 00:06:50 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:31.594 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:31.594 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:31.594 00:06:50 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:31.594 00:06:50 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:31.594 00:06:50 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:31.594 00:06:50 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.594 00:06:50 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.594 00:06:50 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.594 00:06:50 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.594 00:06:50 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:31.595 [2024-07-16 00:06:50.268547] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
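The dif_verify pass that starts here feeds a 4096-byte data buffer, and the 512-byte and 8-byte values in its config echo line up with T10 DIF framing: 512-byte logical blocks, each carrying an 8-byte protection-information field. Note that the run_test line drops -y for the DIF workloads (the config echo reads No rather than Yes). A standalone sketch:

  # DIF verify over a 4096-byte buffer (512-byte blocks, 8-byte PI each).
  sudo ./build/examples/accel_perf -t 1 -w dif_verify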
00:05:31.595 [2024-07-16 00:06:50.268612] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1342435 ] 00:05:31.595 [2024-07-16 00:06:50.323785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.595 [2024-07-16 00:06:50.396935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:31.595 00:06:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:31.854 00:06:50 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:31.854 00:06:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:32.788 00:06:51 accel.accel_dif_verify -- 
accel/accel.sh@21 -- # case "$var" in 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:32.788 00:06:51 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:32.788 00:05:32.788 real 0m1.336s 00:05:32.788 user 0m1.236s 00:05:32.788 sys 0m0.115s 00:05:32.788 00:06:51 accel.accel_dif_verify -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:32.788 00:06:51 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:32.788 ************************************ 00:05:32.788 END TEST accel_dif_verify 00:05:32.788 ************************************ 00:05:32.788 00:06:51 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:32.788 00:06:51 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:32.788 00:06:51 accel -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:05:32.788 00:06:51 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:32.788 00:06:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.788 ************************************ 00:05:32.788 START TEST accel_dif_generate 00:05:32.788 ************************************ 00:05:32.788 00:06:51 accel.accel_dif_generate -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dif_generate 00:05:32.788 00:06:51 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:32.788 00:06:51 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:32.788 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:32.788 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 
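Each test prints a real/user/sys triple just before its END banner; so far every software-module pass lands around 1.32 to 1.34 s real for a 1-second workload, the overhead being app startup and teardown. Pulling those triples out of a captured console log is a one-liner; the log file name here is an assumption:

  # Extract the per-test timing summaries from a saved log.
  grep -oE '(real|user|sys)[[:space:]]+[0-9]+m[0-9.]+s' autotest.log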
-- # read -r var val 00:05:32.788 00:06:51 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:32.788 00:06:51 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:32.788 00:06:51 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:32.788 00:06:51 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.788 00:06:51 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.788 00:06:51 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.788 00:06:51 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.788 00:06:51 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.788 00:06:51 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:32.788 00:06:51 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:33.046 [2024-07-16 00:06:51.658909] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:05:33.046 [2024-07-16 00:06:51.658979] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1342693 ] 00:05:33.046 [2024-07-16 00:06:51.715795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.046 [2024-07-16 00:06:51.789055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:33.046 00:06:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@23 
-- # accel_opc=dif_generate 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:33.047 00:06:51 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:33.047 00:06:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:34.427 00:06:52 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.427 00:05:34.427 real 0m1.340s 00:05:34.427 user 0m1.239s 00:05:34.427 sys 0m0.115s 00:05:34.427 
00:06:52 accel.accel_dif_generate -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:34.427 00:06:52 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:34.427 ************************************ 00:05:34.427 END TEST accel_dif_generate 00:05:34.427 ************************************ 00:05:34.427 00:06:53 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:34.427 00:06:53 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:34.427 00:06:53 accel -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:05:34.427 00:06:53 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:34.427 00:06:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.427 ************************************ 00:05:34.427 START TEST accel_dif_generate_copy 00:05:34.427 ************************************ 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dif_generate_copy 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:34.427 [2024-07-16 00:06:53.055638] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
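dif_generate finishes in 1.340 s real and the suite moves on to dif_generate_copy. The config echo below lists two 4096-byte buffers and verify No, consistent with generating protection information and copying the result into a second buffer in a single operation. A standalone sketch:

  # Generate DIF and copy into a separate 4096-byte destination buffer.
  sudo ./build/examples/accel_perf -t 1 -w dif_generate_copy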
00:05:34.427 [2024-07-16 00:06:53.055706] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1342951 ] 00:05:34.427 [2024-07-16 00:06:53.111567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.427 [2024-07-16 00:06:53.184530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.427 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val= 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.428 00:06:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.805 00:06:54 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:35.805 00:06:54 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.806 00:05:35.806 real 0m1.339s 00:05:35.806 user 0m1.241s 00:05:35.806 sys 0m0.112s 00:05:35.806 00:06:54 accel.accel_dif_generate_copy -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:35.806 00:06:54 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:35.806 ************************************ 00:05:35.806 END TEST accel_dif_generate_copy 00:05:35.806 ************************************ 00:05:35.806 00:06:54 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:35.806 00:06:54 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:35.806 00:06:54 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:35.806 00:06:54 accel -- common/autotest_common.sh@1093 -- # '[' 8 -le 1 ']' 00:05:35.806 00:06:54 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:35.806 00:06:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.806 ************************************ 00:05:35.806 START TEST accel_comp 00:05:35.806 ************************************ 00:05:35.806 00:06:54 accel.accel_comp -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:35.806 
00:06:54 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:35.806 [2024-07-16 00:06:54.454650] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:05:35.806 [2024-07-16 00:06:54.454696] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343207 ] 00:05:35.806 [2024-07-16 00:06:54.508734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.806 [2024-07-16 00:06:54.581467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:35.806 00:06:54 
accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:35.806 00:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:37.185 00:06:55 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.185 00:05:37.185 real 0m1.337s 00:05:37.185 user 0m1.238s 00:05:37.185 sys 0m0.113s 00:05:37.185 00:06:55 accel.accel_comp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:37.185 00:06:55 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:37.185 ************************************ 00:05:37.185 END TEST accel_comp 00:05:37.185 ************************************ 00:05:37.185 00:06:55 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:37.185 00:06:55 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:37.185 00:06:55 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:05:37.185 00:06:55 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:37.185 00:06:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.185 ************************************ 00:05:37.185 START 
TEST accel_decomp 00:05:37.185 ************************************ 00:05:37.185 00:06:55 accel.accel_decomp -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:37.185 00:06:55 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:37.185 00:06:55 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:37.185 00:06:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:55 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:37.185 00:06:55 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:37.185 00:06:55 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:37.185 00:06:55 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.185 00:06:55 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.185 00:06:55 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.185 00:06:55 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.185 00:06:55 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.185 00:06:55 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:37.185 00:06:55 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:37.185 [2024-07-16 00:06:55.840484] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:05:37.185 [2024-07-16 00:06:55.840536] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343461 ] 00:05:37.185 [2024-07-16 00:06:55.894866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.185 [2024-07-16 00:06:55.967569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:56 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:56 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:37.185 00:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:38.564 00:06:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.564 00:05:38.564 real 0m1.336s 00:05:38.564 user 0m1.234s 00:05:38.564 sys 0m0.117s 00:05:38.564 00:06:57 accel.accel_decomp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:38.564 00:06:57 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:38.564 ************************************ 00:05:38.564 END TEST accel_decomp 00:05:38.564 
************************************ 00:05:38.564 00:06:57 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:38.564 00:06:57 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:38.564 00:06:57 accel -- common/autotest_common.sh@1093 -- # '[' 11 -le 1 ']' 00:05:38.564 00:06:57 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:38.564 00:06:57 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.564 ************************************ 00:05:38.564 START TEST accel_decomp_full 00:05:38.564 ************************************ 00:05:38.564 00:06:57 accel.accel_decomp_full -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:38.564 00:06:57 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:38.564 00:06:57 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:38.564 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.564 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.564 00:06:57 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:38.564 00:06:57 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:38.564 00:06:57 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:38.564 00:06:57 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.564 00:06:57 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.564 00:06:57 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.564 00:06:57 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.564 00:06:57 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.564 00:06:57 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:38.564 00:06:57 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:38.564 [2024-07-16 00:06:57.244258] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:05:38.564 [2024-07-16 00:06:57.244324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343720 ] 00:05:38.564 [2024-07-16 00:06:57.299211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.564 [2024-07-16 00:06:57.371890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.564 00:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:38.564 00:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.564 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.824 00:06:57 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:38.824 00:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 
00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:39.801 00:06:58 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.801 00:05:39.801 real 0m1.348s 00:05:39.801 user 0m1.245s 00:05:39.801 sys 0m0.116s 00:05:39.801 00:06:58 accel.accel_decomp_full -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:39.802 00:06:58 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:39.802 ************************************ 00:05:39.802 END TEST accel_decomp_full 00:05:39.802 ************************************ 00:05:39.802 00:06:58 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:39.802 00:06:58 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:39.802 00:06:58 accel -- common/autotest_common.sh@1093 -- # '[' 11 -le 1 ']' 00:05:39.802 00:06:58 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:39.802 00:06:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.802 ************************************ 00:05:39.802 START TEST accel_decomp_mcore 00:05:39.802 ************************************ 00:05:39.802 00:06:58 accel.accel_decomp_mcore -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:39.802 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:39.802 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:39.802 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:39.802 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
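The accel_decomp_full case that just closed out above ran the same accel_perf example binary as every other sub-test in this section. A minimal sketch of replaying it by hand, assuming the same workspace layout; every flag is carried over verbatim from the command line logged above, except that the '-c /dev/fd/62' JSON config (assembled by build_accel_config and piped through jq in the trace) is omitted here on the assumption that defaults suffice:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 1-second software decompress of test/accel/bib with verification (-y).
  # Judging by the '111250 bytes' vs '4096 bytes' values in the trace,
  # '-o 0' appears to make accel_perf use the full input size rather than
  # 4096-byte chunks -- an inference from this log, not a documented claim.
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" -y -o 0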
00:05:39.802 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:39.802 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:39.802 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:39.802 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.802 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.802 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.802 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.802 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.802 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:39.802 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:40.084 [2024-07-16 00:06:58.654459] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:05:40.084 [2024-07-16 00:06:58.654521] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343989 ] 00:05:40.084 [2024-07-16 00:06:58.711562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:40.084 [2024-07-16 00:06:58.792816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.084 [2024-07-16 00:06:58.792912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.084 [2024-07-16 00:06:58.792972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:40.084 [2024-07-16 00:06:58.792974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.084 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:40.084 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.084 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.084 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.084 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:40.084 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.084 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.084 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.084 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:40.084 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.084 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.084 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.084 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:40.084 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.084 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.084 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.085 00:06:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.466 00:05:41.466 real 0m1.357s 00:05:41.466 user 0m4.587s 00:05:41.466 sys 0m0.115s 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:41.466 00:06:59 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:41.466 ************************************ 00:05:41.466 END TEST accel_decomp_mcore 00:05:41.466 ************************************ 00:05:41.466 00:07:00 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:41.466 00:07:00 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:41.466 00:07:00 accel -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']' 00:05:41.466 00:07:00 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:41.466 00:07:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.466 ************************************ 00:05:41.466 START TEST accel_decomp_full_mcore 00:05:41.466 ************************************ 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 
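The mcore variants differ from the single-core sketch earlier only in the '-m 0xf' core mask, which is why four reactors come up (cores 0 through 3 above) and why user time lands near four times wall time in the accel_decomp_mcore summary (user 0m4.587s against real 0m1.357s). A hedged sketch, reusing SPDK_DIR from the earlier snippet:

  # Same decompress workload, fanned out across four reactors via the
  # 0xf core mask seen in the accel_decomp_mcore invocation above.
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" -y -m 0xf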
00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:41.466 [2024-07-16 00:07:00.076081] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:05:41.466 [2024-07-16 00:07:00.076129] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1344265 ] 00:05:41.466 [2024-07-16 00:07:00.130466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:41.466 [2024-07-16 00:07:00.206066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.466 [2024-07-16 00:07:00.206164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.466 [2024-07-16 00:07:00.206249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.466 [2024-07-16 00:07:00.206266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.466 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:41.467 00:07:00 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:41.467 00:07:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.846 00:05:42.846 real 0m1.354s 00:05:42.846 user 0m4.594s 00:05:42.846 sys 0m0.122s 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:42.846 00:07:01 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:42.846 ************************************ 00:05:42.846 END TEST accel_decomp_full_mcore 00:05:42.846 ************************************ 00:05:42.846 00:07:01 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:42.846 00:07:01 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:42.846 00:07:01 accel -- common/autotest_common.sh@1093 -- # '[' 11 -le 1 ']' 00:05:42.846 00:07:01 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:42.846 00:07:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.846 ************************************ 00:05:42.846 START TEST accel_decomp_mthread 00:05:42.846 ************************************ 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 
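Note: as the accel.sh@12 trace shows, the harness hands accel_perf its JSON accel config on /dev/fd/62 instead of a file on disk, then reads it back with jq -r . A minimal sketch of that pattern, assuming an empty config (the real build_accel_config may add module entries, and paths are abbreviated):

  cfg='{}'                                   # accel_json_cfg entries joined with IFS=,
  build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
      -l test/accel/bib -y -T 2 62< <(printf '%s' "$cfg")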
00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:42.846 [2024-07-16 00:07:01.476572] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:05:42.846 [2024-07-16 00:07:01.476619] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1344543 ] 00:05:42.846 [2024-07-16 00:07:01.529656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.846 [2024-07-16 00:07:01.602803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:42.846 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # 
read -r var val 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.847 00:07:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.847 00:07:01 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.224 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:44.224 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.224 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.224 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.224 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:44.224 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.224 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.224 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.224 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:44.224 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.224 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.224 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.224 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:44.224 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:44.225 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:44.225 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:44.225 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.225 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:44.225 00:07:02 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.225 00:05:44.225 real 0m1.333s 00:05:44.225 user 0m1.241s 00:05:44.225 sys 0m0.108s 00:05:44.225 00:07:02 accel.accel_decomp_mthread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:44.225 00:07:02 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:44.225 ************************************ 00:05:44.225 END TEST accel_decomp_mthread 00:05:44.225 ************************************ 00:05:44.225 00:07:02 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:44.225 00:07:02 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:44.225 00:07:02 accel -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']' 
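Note: every case here runs under run_test from autotest_common.sh, which produces the START/END banners and the real/user/sys summary seen above. Roughly, as a simplified sketch (the real helper also toggles xtrace and validates its arguments):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                              # emits the real/user/sys lines
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }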
00:05:44.225 00:07:02 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:44.225 00:07:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.225 ************************************ 00:05:44.225 START TEST accel_decomp_full_mthread 00:05:44.225 ************************************ 00:05:44.225 00:07:02 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:44.225 00:07:02 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:44.225 00:07:02 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:44.225 00:07:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:02 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:44.225 00:07:02 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:44.225 00:07:02 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:44.225 00:07:02 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.225 00:07:02 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.225 00:07:02 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.225 00:07:02 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.225 00:07:02 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.225 00:07:02 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:44.225 00:07:02 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:44.225 [2024-07-16 00:07:02.874718] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:05:44.225 [2024-07-16 00:07:02.874767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1344803 ] 00:05:44.225 [2024-07-16 00:07:02.929000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.225 [2024-07-16 00:07:03.001246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 
00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:44.225 00:07:03 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:44.225 00:07:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.602 00:05:45.602 real 0m1.363s 00:05:45.602 user 0m1.265s 00:05:45.602 sys 0m0.112s 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:45.602 00:07:04 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:45.602 ************************************ 00:05:45.602 END TEST accel_decomp_full_mthread 00:05:45.602 ************************************ 
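Note: the three accel.sh@27 checks that close each accel case assert that a module and an opcode were parsed back from accel_perf's output and that the software engine, not a hardware module, served decompress. In effect (variable names taken from the @22/@23 assignments in the trace):

  [[ -n $accel_module ]]                     # a module was reported
  [[ -n $accel_opc ]]                        # an opcode was reported
  [[ $accel_module == software ]]            # software path handled decompress

For reference, the 4096-byte mthread run above finished in 1.333s of wall time versus 1.363s here for the 111250-byte full_mthread run, both with -T 2.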
00:05:45.602 00:07:04 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:45.602 00:07:04 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:45.602 00:07:04 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:45.602 00:07:04 accel -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:05:45.602 00:07:04 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:45.602 00:07:04 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:45.602 00:07:04 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.602 00:07:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.602 00:07:04 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.602 00:07:04 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.602 00:07:04 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.602 00:07:04 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.602 00:07:04 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:45.602 00:07:04 accel -- accel/accel.sh@41 -- # jq -r . 00:05:45.602 ************************************ 00:05:45.602 START TEST accel_dif_functional_tests 00:05:45.602 ************************************ 00:05:45.602 00:07:04 accel.accel_dif_functional_tests -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:45.602 [2024-07-16 00:07:04.317466] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:05:45.602 [2024-07-16 00:07:04.317500] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1345074 ] 00:05:45.602 [2024-07-16 00:07:04.369881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:45.602 [2024-07-16 00:07:04.443815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.602 [2024-07-16 00:07:04.443911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.602 [2024-07-16 00:07:04.443913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.862 00:05:45.862 00:05:45.862 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.862 http://cunit.sourceforge.net/ 00:05:45.862 00:05:45.862 00:05:45.862 Suite: accel_dif 00:05:45.862 Test: verify: DIF generated, GUARD check ...passed 00:05:45.862 Test: verify: DIF generated, APPTAG check ...passed 00:05:45.862 Test: verify: DIF generated, REFTAG check ...passed 00:05:45.862 Test: verify: DIF not generated, GUARD check ...[2024-07-16 00:07:04.512082] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:45.862 passed 00:05:45.862 Test: verify: DIF not generated, APPTAG check ...[2024-07-16 00:07:04.512130] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:45.862 passed 00:05:45.862 Test: verify: DIF not generated, REFTAG check ...[2024-07-16 00:07:04.512152] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:45.862 passed 00:05:45.862 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:45.862 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-16 00:07:04.512198] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:45.862 passed 00:05:45.862 Test: 
verify: APPTAG incorrect, no APPTAG check ...passed 00:05:45.862 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:45.862 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:45.862 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-16 00:07:04.512323] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:45.862 passed 00:05:45.862 Test: verify copy: DIF generated, GUARD check ...passed 00:05:45.862 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:45.862 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:45.862 Test: verify copy: DIF not generated, GUARD check ...[2024-07-16 00:07:04.512444] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:45.862 passed 00:05:45.862 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-16 00:07:04.512472] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:45.862 passed 00:05:45.862 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-16 00:07:04.512496] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:45.862 passed 00:05:45.862 Test: generate copy: DIF generated, GUARD check ...passed 00:05:45.862 Test: generate copy: DIF generated, APPTAG check ...passed 00:05:45.862 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:45.862 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:45.862 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:45.862 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:45.862 Test: generate copy: iovecs-len validate ...[2024-07-16 00:07:04.512661] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
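Note: the *ERROR* lines above are expected — each "not generated" case feeds protection fields that do not match (Guard is a CRC over the block data, App Tag an application-defined value, Ref Tag is checked against the LBA), and the case passes precisely because dif.c rejects them. The dif binary is driven the same way as accel_perf, with a JSON config on fd 62 (sketch, assuming an empty config):

  test/accel/dif/dif -c /dev/fd/62 62< <(echo '{}')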
00:05:45.862 passed 00:05:45.862 Test: generate copy: buffer alignment validate ...passed 00:05:45.862 00:05:45.862 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.862 suites 1 1 n/a 0 0 00:05:45.862 tests 26 26 26 0 0 00:05:45.862 asserts 115 115 115 0 n/a 00:05:45.862 00:05:45.862 Elapsed time = 0.002 seconds 00:05:45.862 00:05:45.862 real 0m0.408s 00:05:45.862 user 0m0.626s 00:05:45.862 sys 0m0.140s 00:05:45.862 00:07:04 accel.accel_dif_functional_tests -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:45.862 00:07:04 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:45.862 ************************************ 00:05:45.862 END TEST accel_dif_functional_tests 00:05:45.862 ************************************ 00:05:45.862 00:07:04 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:45.862 00:05:45.862 real 0m30.852s 00:05:45.862 user 0m34.829s 00:05:45.862 sys 0m4.094s 00:05:46.122 00:07:04 accel -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:46.122 00:07:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.122 ************************************ 00:05:46.122 END TEST accel 00:05:46.122 ************************************ 00:05:46.122 00:07:04 -- common/autotest_common.sh@1136 -- # return 0 00:05:46.122 00:07:04 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:46.122 00:07:04 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:46.122 00:07:04 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:46.122 00:07:04 -- common/autotest_common.sh@10 -- # set +x 00:05:46.122 ************************************ 00:05:46.122 START TEST accel_rpc 00:05:46.122 ************************************ 00:05:46.122 00:07:04 accel_rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:46.122 * Looking for test storage... 00:05:46.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:46.122 00:07:04 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:46.122 00:07:04 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1345172 00:05:46.122 00:07:04 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1345172 00:05:46.122 00:07:04 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:46.122 00:07:04 accel_rpc -- common/autotest_common.sh@823 -- # '[' -z 1345172 ']' 00:05:46.122 00:07:04 accel_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.122 00:07:04 accel_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:46.122 00:07:04 accel_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.122 00:07:04 accel_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:46.122 00:07:04 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.122 [2024-07-16 00:07:04.919459] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
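Note: accel_rpc.sh starts the target with --wait-for-rpc, so the app pauses before subsystem init until RPCs arrive; that is what lets the accel_assign_opc calls below land before framework_start_init. The startup idiom, roughly:

  build/bin/spdk_tgt --wait-for-rpc &
  spdk_tgt_pid=$!
  waitforlisten $spdk_tgt_pid                # polls /var/tmp/spdk.sock until RPCs are accepted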
00:05:46.122 [2024-07-16 00:07:04.919502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1345172 ] 00:05:46.122 [2024-07-16 00:07:04.970900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.381 [2024-07-16 00:07:05.045164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.948 00:07:05 accel_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:46.948 00:07:05 accel_rpc -- common/autotest_common.sh@856 -- # return 0 00:05:46.948 00:07:05 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:46.948 00:07:05 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:46.948 00:07:05 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:46.948 00:07:05 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:46.948 00:07:05 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:46.948 00:07:05 accel_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:46.948 00:07:05 accel_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:46.948 00:07:05 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.948 ************************************ 00:05:46.948 START TEST accel_assign_opcode 00:05:46.948 ************************************ 00:05:46.948 00:07:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1117 -- # accel_assign_opcode_test_suite 00:05:46.948 00:07:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:46.948 00:07:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:46.948 00:07:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:46.948 [2024-07-16 00:07:05.743244] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:46.948 00:07:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:46.948 00:07:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:46.948 00:07:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:46.948 00:07:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:46.948 [2024-07-16 00:07:05.751255] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:46.948 00:07:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:46.948 00:07:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:46.948 00:07:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:46.948 00:07:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:47.233 00:07:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:47.233 00:07:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:47.233 00:07:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:47.233 00:07:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:47.233 00:07:05 accel_rpc.accel_assign_opcode -- 
accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:47.233 00:07:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:47.233 00:07:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:47.233 software 00:05:47.233 00:05:47.233 real 0m0.234s 00:05:47.233 user 0m0.042s 00:05:47.233 sys 0m0.009s 00:05:47.233 00:07:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:47.233 00:07:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:47.233 ************************************ 00:05:47.233 END TEST accel_assign_opcode 00:05:47.233 ************************************ 00:05:47.233 00:07:06 accel_rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:47.233 00:07:06 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1345172 00:05:47.233 00:07:06 accel_rpc -- common/autotest_common.sh@942 -- # '[' -z 1345172 ']' 00:05:47.233 00:07:06 accel_rpc -- common/autotest_common.sh@946 -- # kill -0 1345172 00:05:47.233 00:07:06 accel_rpc -- common/autotest_common.sh@947 -- # uname 00:05:47.233 00:07:06 accel_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:47.233 00:07:06 accel_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1345172 00:05:47.233 00:07:06 accel_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:47.233 00:07:06 accel_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:47.233 00:07:06 accel_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1345172' 00:05:47.233 killing process with pid 1345172 00:05:47.233 00:07:06 accel_rpc -- common/autotest_common.sh@961 -- # kill 1345172 00:05:47.233 00:07:06 accel_rpc -- common/autotest_common.sh@966 -- # wait 1345172 00:05:47.799 00:05:47.799 real 0m1.571s 00:05:47.799 user 0m1.640s 00:05:47.799 sys 0m0.412s 00:05:47.799 00:07:06 accel_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:47.799 00:07:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.799 ************************************ 00:05:47.799 END TEST accel_rpc 00:05:47.799 ************************************ 00:05:47.799 00:07:06 -- common/autotest_common.sh@1136 -- # return 0 00:05:47.799 00:07:06 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:47.799 00:07:06 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:47.799 00:07:06 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:47.799 00:07:06 -- common/autotest_common.sh@10 -- # set +x 00:05:47.799 ************************************ 00:05:47.799 START TEST app_cmdline 00:05:47.799 ************************************ 00:05:47.799 00:07:06 app_cmdline -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:47.799 * Looking for test storage... 
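Note: the assign_opcode suite above exercises the same flow a user would script against a paused target — assign the copy opcode to a module, finish framework init, then read the assignment back:

  scripts/rpc.py accel_assign_opc -o copy -m software
  scripts/rpc.py framework_start_init
  scripts/rpc.py accel_get_opc_assignments | jq -r .copy | grep software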
00:05:47.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:47.799 00:07:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:47.799 00:07:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1345476 00:05:47.799 00:07:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1345476 00:05:47.799 00:07:06 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:47.799 00:07:06 app_cmdline -- common/autotest_common.sh@823 -- # '[' -z 1345476 ']' 00:05:47.799 00:07:06 app_cmdline -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.799 00:07:06 app_cmdline -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:47.800 00:07:06 app_cmdline -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.800 00:07:06 app_cmdline -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:47.800 00:07:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:47.800 [2024-07-16 00:07:06.550663] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:05:47.800 [2024-07-16 00:07:06.550714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1345476 ] 00:05:47.800 [2024-07-16 00:07:06.604500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.058 [2024-07-16 00:07:06.685444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.624 00:07:07 app_cmdline -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:48.624 00:07:07 app_cmdline -- common/autotest_common.sh@856 -- # return 0 00:05:48.624 00:07:07 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:48.881 { 00:05:48.881 "version": "SPDK v24.09-pre git sha1 ba0567a82", 00:05:48.881 "fields": { 00:05:48.881 "major": 24, 00:05:48.881 "minor": 9, 00:05:48.881 "patch": 0, 00:05:48.881 "suffix": "-pre", 00:05:48.881 "commit": "ba0567a82" 00:05:48.881 } 00:05:48.881 } 00:05:48.881 00:07:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:48.881 00:07:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:48.881 00:07:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:48.882 00:07:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:48.882 00:07:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:48.882 00:07:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:48.882 00:07:07 app_cmdline -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:48.882 00:07:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:48.882 00:07:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:48.882 00:07:07 app_cmdline -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:48.882 00:07:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:48.882 00:07:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ 
\s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:48.882 00:07:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:48.882 00:07:07 app_cmdline -- common/autotest_common.sh@642 -- # local es=0 00:05:48.882 00:07:07 app_cmdline -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:48.882 00:07:07 app_cmdline -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:48.882 00:07:07 app_cmdline -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:48.882 00:07:07 app_cmdline -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:48.882 00:07:07 app_cmdline -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:48.882 00:07:07 app_cmdline -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:48.882 00:07:07 app_cmdline -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:48.882 00:07:07 app_cmdline -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:48.882 00:07:07 app_cmdline -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:48.882 00:07:07 app_cmdline -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:48.882 request: 00:05:48.882 { 00:05:48.882 "method": "env_dpdk_get_mem_stats", 00:05:48.882 "req_id": 1 00:05:48.882 } 00:05:48.882 Got JSON-RPC error response 00:05:48.882 response: 00:05:48.882 { 00:05:48.882 "code": -32601, 00:05:48.882 "message": "Method not found" 00:05:48.882 } 00:05:49.175 00:07:07 app_cmdline -- common/autotest_common.sh@645 -- # es=1 00:05:49.175 00:07:07 app_cmdline -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:05:49.175 00:07:07 app_cmdline -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:05:49.175 00:07:07 app_cmdline -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:05:49.175 00:07:07 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1345476 00:05:49.175 00:07:07 app_cmdline -- common/autotest_common.sh@942 -- # '[' -z 1345476 ']' 00:05:49.175 00:07:07 app_cmdline -- common/autotest_common.sh@946 -- # kill -0 1345476 00:05:49.175 00:07:07 app_cmdline -- common/autotest_common.sh@947 -- # uname 00:05:49.175 00:07:07 app_cmdline -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:49.175 00:07:07 app_cmdline -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1345476 00:05:49.175 00:07:07 app_cmdline -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:49.175 00:07:07 app_cmdline -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:49.175 00:07:07 app_cmdline -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1345476' 00:05:49.175 killing process with pid 1345476 00:05:49.175 00:07:07 app_cmdline -- common/autotest_common.sh@961 -- # kill 1345476 00:05:49.175 00:07:07 app_cmdline -- common/autotest_common.sh@966 -- # wait 1345476 00:05:49.434 00:05:49.434 real 0m1.671s 00:05:49.434 user 0m1.993s 00:05:49.434 sys 0m0.430s 00:05:49.434 00:07:08 app_cmdline -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:49.434 00:07:08 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:05:49.434 ************************************ 00:05:49.434 END TEST app_cmdline 00:05:49.434 ************************************ 00:05:49.434 00:07:08 -- common/autotest_common.sh@1136 -- # return 0 00:05:49.434 00:07:08 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:49.434 00:07:08 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:49.434 00:07:08 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:49.434 00:07:08 -- common/autotest_common.sh@10 -- # set +x 00:05:49.434 ************************************ 00:05:49.434 START TEST version 00:05:49.434 ************************************ 00:05:49.434 00:07:08 version -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:49.434 * Looking for test storage... 00:05:49.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:49.434 00:07:08 version -- app/version.sh@17 -- # get_header_version major 00:05:49.434 00:07:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:49.434 00:07:08 version -- app/version.sh@14 -- # tr -d '"' 00:05:49.434 00:07:08 version -- app/version.sh@14 -- # cut -f2 00:05:49.434 00:07:08 version -- app/version.sh@17 -- # major=24 00:05:49.434 00:07:08 version -- app/version.sh@18 -- # get_header_version minor 00:05:49.434 00:07:08 version -- app/version.sh@14 -- # tr -d '"' 00:05:49.434 00:07:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:49.434 00:07:08 version -- app/version.sh@14 -- # cut -f2 00:05:49.434 00:07:08 version -- app/version.sh@18 -- # minor=9 00:05:49.434 00:07:08 version -- app/version.sh@19 -- # get_header_version patch 00:05:49.434 00:07:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:49.434 00:07:08 version -- app/version.sh@14 -- # cut -f2 00:05:49.434 00:07:08 version -- app/version.sh@14 -- # tr -d '"' 00:05:49.434 00:07:08 version -- app/version.sh@19 -- # patch=0 00:05:49.434 00:07:08 version -- app/version.sh@20 -- # get_header_version suffix 00:05:49.434 00:07:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:49.434 00:07:08 version -- app/version.sh@14 -- # cut -f2 00:05:49.434 00:07:08 version -- app/version.sh@14 -- # tr -d '"' 00:05:49.434 00:07:08 version -- app/version.sh@20 -- # suffix=-pre 00:05:49.434 00:07:08 version -- app/version.sh@22 -- # version=24.9 00:05:49.434 00:07:08 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:49.434 00:07:08 version -- app/version.sh@28 -- # version=24.9rc0 00:05:49.434 00:07:08 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:49.434 00:07:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:49.693 00:07:08 
version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:49.693 00:07:08 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:49.693 00:05:49.693 real 0m0.151s 00:05:49.693 user 0m0.082s 00:05:49.693 sys 0m0.102s 00:05:49.693 00:07:08 version -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:49.693 00:07:08 version -- common/autotest_common.sh@10 -- # set +x 00:05:49.693 ************************************ 00:05:49.693 END TEST version 00:05:49.693 ************************************ 00:05:49.693 00:07:08 -- common/autotest_common.sh@1136 -- # return 0 00:05:49.693 00:07:08 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:05:49.693 00:07:08 -- spdk/autotest.sh@198 -- # uname -s 00:05:49.693 00:07:08 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:49.693 00:07:08 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:49.693 00:07:08 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:49.693 00:07:08 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:05:49.693 00:07:08 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:49.693 00:07:08 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:49.693 00:07:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:49.693 00:07:08 -- common/autotest_common.sh@10 -- # set +x 00:05:49.693 00:07:08 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:49.693 00:07:08 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:05:49.693 00:07:08 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:05:49.693 00:07:08 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:05:49.693 00:07:08 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:05:49.693 00:07:08 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:05:49.693 00:07:08 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:49.693 00:07:08 -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:05:49.693 00:07:08 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:49.693 00:07:08 -- common/autotest_common.sh@10 -- # set +x 00:05:49.693 ************************************ 00:05:49.693 START TEST nvmf_tcp 00:05:49.693 ************************************ 00:05:49.693 00:07:08 nvmf_tcp -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:49.693 * Looking for test storage... 00:05:49.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:49.693 00:07:08 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.693 00:07:08 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.693 00:07:08 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.693 00:07:08 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.693 00:07:08 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.693 00:07:08 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.693 00:07:08 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:49.693 00:07:08 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:49.693 00:07:08 nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:49.693 00:07:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:49.693 00:07:08 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:49.693 00:07:08 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:05:49.693 00:07:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:49.693 00:07:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.952 ************************************ 00:05:49.952 START TEST nvmf_example 00:05:49.952 ************************************ 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:49.952 * Looking for test storage... 
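The version test traced above boils down to a grep/cut/tr pipeline over include/spdk/version.h, whose result is compared against the Python package's spdk.__version__ (py_version=24.9rc0 above). A minimal sketch of that extraction, assuming the workspace path from this run and that the header's name/value fields are tab-separated (which is why the traced script can use a bare cut -f2):

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    get_header_version() {
        # e.g. MAJOR -> 24, SUFFIX -> -pre
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$rootdir/include/spdk/version.h" |
            cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)    # 24
    minor=$(get_header_version MINOR)    # 9
    patch=$(get_header_version PATCH)    # 0
    suffix=$(get_header_version SUFFIX)  # -pre
    version=$major.$minor
    (( patch != 0 )) && version+=.$patch
    [[ $suffix == -pre ]] && version+=rc0   # -> 24.9rc0, matching py_version above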
00:05:49.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:49.952 00:07:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:49.953 00:07:08 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:05:49.953 00:07:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:49.953 00:07:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:49.953 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:49.953 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:49.953 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:49.953 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:49.953 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:49.953 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:49.953 00:07:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:49.953 00:07:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:49.953 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:49.953 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:49.953 00:07:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:49.953 00:07:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:55.224 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:55.224 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:55.224 Found net devices under 
0000:86:00.0: cvl_0_0 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:55.224 Found net devices under 0000:86:00.1: cvl_0_1 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:55.224 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:55.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:55.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:05:55.225 00:05:55.225 --- 10.0.0.2 ping statistics --- 00:05:55.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:55.225 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:55.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:55.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:05:55.225 00:05:55.225 --- 10.0.0.1 ping statistics --- 00:05:55.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:55.225 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1348918 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1348918 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@823 -- # '[' -z 1348918 ']' 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
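Condensed from the nvmf_tcp_init trace above: the target-side port (cvl_0_0) is moved into a private network namespace and the example target is launched inside it, while the initiator side (cvl_0_1) stays in the default namespace, so host and target talk over a real link rather than loopback. A sketch of the bring-up using the interface names and addresses exactly as logged:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the default NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # initiator -> target sanity check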
00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:55.225 00:07:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # return 0 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:55.792 00:07:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:56.052 00:07:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:56.052 00:07:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:56.052 00:07:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:56.052 00:07:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:56.052 00:07:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:56.052 00:07:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:56.052 00:07:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:06.100 Initializing NVMe Controllers 00:06:06.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:06.100 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:06.100 Initialization complete. Launching workers. 00:06:06.100 ======================================================== 00:06:06.100 Latency(us) 00:06:06.100 Device Information : IOPS MiB/s Average min max 00:06:06.100 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17779.70 69.45 3599.58 699.59 16409.97 00:06:06.100 ======================================================== 00:06:06.100 Total : 17779.70 69.45 3599.58 699.59 16409.97 00:06:06.359 00:07:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:06.359 00:07:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:06.359 00:07:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:06.359 00:07:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:06.359 00:07:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:06.359 00:07:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:06.359 00:07:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:06.359 00:07:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:06.359 rmmod nvme_tcp 00:06:06.359 rmmod nvme_fabrics 00:06:06.359 rmmod nvme_keyring 00:06:06.359 00:07:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:06.359 00:07:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:06.359 00:07:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:06.359 00:07:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1348918 ']' 00:06:06.359 00:07:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1348918 00:06:06.359 00:07:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@942 -- # '[' -z 1348918 ']' 00:06:06.359 00:07:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # kill -0 1348918 00:06:06.359 00:07:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@947 -- # uname 00:06:06.359 00:07:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:06.359 00:07:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1348918 00:06:06.359 00:07:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # process_name=nvmf 00:06:06.359 00:07:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # '[' nvmf = sudo ']' 00:06:06.359 00:07:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1348918' 00:06:06.359 killing process with pid 1348918 00:06:06.359 00:07:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@961 -- # kill 1348918 00:06:06.359 00:07:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # wait 1348918 00:06:06.617 nvmf threads initialize successfully 00:06:06.617 bdev subsystem init successfully 00:06:06.617 created a nvmf target service 00:06:06.617 create targets' poll groups done 00:06:06.617 all subsystems of target started 00:06:06.617 nvmf target is running 00:06:06.617 all subsystems of target stopped 00:06:06.617 destroy targets' poll groups done 00:06:06.617 destroyed the nvmf target service 00:06:06.617 bdev subsystem finish
successfully 00:06:06.617 nvmf threads destroy successfully 00:06:06.617 00:07:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:06.617 00:07:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:06.617 00:07:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:06.617 00:07:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:06.617 00:07:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:06.617 00:07:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:06.618 00:07:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:06.618 00:07:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.523 00:07:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:08.523 00:07:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:08.523 00:07:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:08.523 00:07:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.523 00:06:08.523 real 0m18.809s 00:06:08.523 user 0m46.003s 00:06:08.523 sys 0m5.175s 00:06:08.523 00:07:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:08.523 00:07:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.523 ************************************ 00:06:08.523 END TEST nvmf_example 00:06:08.523 ************************************ 00:06:08.784 00:07:27 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:06:08.784 00:07:27 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:08.784 00:07:27 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:06:08.784 00:07:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:08.784 00:07:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.784 ************************************ 00:06:08.784 START TEST nvmf_filesystem 00:06:08.784 ************************************ 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:08.784 * Looking for test storage... 
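The example run that just finished follows the standard target bring-up visible in the rpc_cmd trace above (rpc_cmd being the harness's JSON-RPC helper): create the TCP transport, back a namespace with a malloc bdev, expose it through a subsystem listener, then drive I/O with spdk_nvme_perf. Condensed, with values as logged:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB IO unit
    rpc_cmd bdev_malloc_create 64 512                        # 64 MiB bdev, 512 B blocks -> Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The Latency(us) table above reports the result of this run: ~17.8k IOPS at ~3.6 ms average latency from a single core.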
00:06:08.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:08.784 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:08.785 00:07:27 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
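Alongside this CONFIG_* dump, applications.sh (sourced and traced just below) double-checks that the tree was actually built the way the dump claims by pattern-matching the generated config header for SPDK_CONFIG_DEBUG before deciding whether the debug-app checks apply. Roughly, under this workspace's paths:

    config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
    if [[ -e $config_h && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        : # debug build, as this run's "#define SPDK_CONFIG_DEBUG 1" confirms
    fi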
00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:08.785 00:07:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:08.785 #define SPDK_CONFIG_H 00:06:08.785 #define SPDK_CONFIG_APPS 1 00:06:08.785 #define SPDK_CONFIG_ARCH native 00:06:08.785 #undef SPDK_CONFIG_ASAN 00:06:08.785 #undef SPDK_CONFIG_AVAHI 00:06:08.785 #undef SPDK_CONFIG_CET 00:06:08.785 #define SPDK_CONFIG_COVERAGE 1 00:06:08.785 #define SPDK_CONFIG_CROSS_PREFIX 00:06:08.785 #undef SPDK_CONFIG_CRYPTO 00:06:08.785 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:08.785 #undef SPDK_CONFIG_CUSTOMOCF 00:06:08.785 #undef SPDK_CONFIG_DAOS 00:06:08.785 #define SPDK_CONFIG_DAOS_DIR 00:06:08.785 #define SPDK_CONFIG_DEBUG 1 00:06:08.785 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:08.785 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:08.785 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:08.785 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:08.785 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:08.785 #undef SPDK_CONFIG_DPDK_UADK 00:06:08.785 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:08.785 #define SPDK_CONFIG_EXAMPLES 1 00:06:08.785 #undef SPDK_CONFIG_FC 00:06:08.785 #define SPDK_CONFIG_FC_PATH 00:06:08.785 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:08.785 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:08.785 #undef SPDK_CONFIG_FUSE 00:06:08.785 #undef SPDK_CONFIG_FUZZER 00:06:08.785 #define SPDK_CONFIG_FUZZER_LIB 00:06:08.785 #undef SPDK_CONFIG_GOLANG 00:06:08.785 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:08.785 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:08.785 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:08.785 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:08.785 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:08.785 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:08.785 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:08.785 #define SPDK_CONFIG_IDXD 1 00:06:08.785 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:08.785 #undef SPDK_CONFIG_IPSEC_MB 00:06:08.785 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:08.785 #define SPDK_CONFIG_ISAL 1 00:06:08.785 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:08.785 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:08.785 #define SPDK_CONFIG_LIBDIR 00:06:08.785 #undef SPDK_CONFIG_LTO 00:06:08.785 #define SPDK_CONFIG_MAX_LCORES 128 00:06:08.785 #define SPDK_CONFIG_NVME_CUSE 1 00:06:08.785 #undef SPDK_CONFIG_OCF 00:06:08.785 #define SPDK_CONFIG_OCF_PATH 00:06:08.785 #define 
SPDK_CONFIG_OPENSSL_PATH 00:06:08.785 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:08.785 #define SPDK_CONFIG_PGO_DIR 00:06:08.785 #undef SPDK_CONFIG_PGO_USE 00:06:08.785 #define SPDK_CONFIG_PREFIX /usr/local 00:06:08.785 #undef SPDK_CONFIG_RAID5F 00:06:08.785 #undef SPDK_CONFIG_RBD 00:06:08.785 #define SPDK_CONFIG_RDMA 1 00:06:08.785 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:08.785 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:08.785 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:08.785 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:08.785 #define SPDK_CONFIG_SHARED 1 00:06:08.785 #undef SPDK_CONFIG_SMA 00:06:08.785 #define SPDK_CONFIG_TESTS 1 00:06:08.785 #undef SPDK_CONFIG_TSAN 00:06:08.785 #define SPDK_CONFIG_UBLK 1 00:06:08.785 #define SPDK_CONFIG_UBSAN 1 00:06:08.785 #undef SPDK_CONFIG_UNIT_TESTS 00:06:08.785 #undef SPDK_CONFIG_URING 00:06:08.785 #define SPDK_CONFIG_URING_PATH 00:06:08.785 #undef SPDK_CONFIG_URING_ZNS 00:06:08.785 #undef SPDK_CONFIG_USDT 00:06:08.785 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:08.785 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:08.785 #define SPDK_CONFIG_VFIO_USER 1 00:06:08.785 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:08.785 #define SPDK_CONFIG_VHOST 1 00:06:08.785 #define SPDK_CONFIG_VIRTIO 1 00:06:08.785 #undef SPDK_CONFIG_VTUNE 00:06:08.785 #define SPDK_CONFIG_VTUNE_DIR 00:06:08.785 #define SPDK_CONFIG_WERROR 1 00:06:08.786 #define SPDK_CONFIG_WPDK_DIR 00:06:08.786 #undef SPDK_CONFIG_XNVME 00:06:08.786 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:08.786 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:08.787 00:07:27 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@273 -- # MAKE=make 00:06:08.787 
00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@274 -- # MAKEFLAGS=-j96 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@290 -- # export HUGEMEM=4096 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@290 -- # HUGEMEM=4096 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@292 -- # NO_HUGE=() 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@293 -- # TEST_MODE= 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@294 -- # for i in "$@" 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # case "$i" in 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # TEST_TRANSPORT=tcp 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@312 -- # [[ -z 1351308 ]] 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@312 -- # kill -0 1351308 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1674 -- # set_test_storage 2147483648 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@322 -- # [[ -v testdir ]] 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@324 -- # local requested_size=2147483648 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@325 -- # local mount target_dir 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # local -A mounts fss sizes avails uses 00:06:08.787 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # local source fs size avail mount use 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local storage_fallback storage_candidates 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # mktemp -udt spdk.XXXXXX 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # storage_fallback=/tmp/spdk.BMPSgc 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.BMPSgc/tests/target /tmp/spdk.BMPSgc 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@352 -- # requested_size=2214592512 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@321 -- # df -T 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@321 -- # grep -v Filesystem 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=spdk_devtmpfs 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=devtmpfs 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=67108864 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=67108864 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@357 -- # uses["$mount"]=0 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=/dev/pmem0 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=ext2 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=950202368 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=5284429824 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=4334227456 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=spdk_root 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=overlay 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=189559603200 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=195974299648 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=6414696448 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=tmpfs 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=tmpfs 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=97983774720 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=97987149824 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=3375104 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=tmpfs 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=tmpfs 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=39185485824 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=39194861568 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=9375744 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=tmpfs 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=tmpfs 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=97986215936 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=97987149824 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=933888 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail 
_ mount 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=tmpfs 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=tmpfs 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=19597422592 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=19597426688 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=4096 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # printf '* Looking for test storage...\n' 00:06:08.788 * Looking for test storage... 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # local target_space new_size 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # for target_dir in "${storage_candidates[@]}" 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # mount=/ 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # target_space=189559603200 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # (( target_space == 0 || target_space < requested_size )) 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # (( target_space >= requested_size )) 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # [[ overlay == tmpfs ]] 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # [[ overlay == ramfs ]] 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # [[ / == / ]] 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # new_size=8629288960 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@376 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@383 -- # return 0 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set -o errtrace 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1677 -- # shopt -s extdebug 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # PS4=' \t 
${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # true 00:06:08.788 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # xtrace_fd 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:09.048 00:07:27 
nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:09.048 00:07:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:14.323 00:07:32 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:14.323 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:14.323 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:14.323 00:07:32 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:14.323 Found net devices under 0000:86:00.0: cvl_0_0 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:14.323 Found net devices under 0000:86:00.1: cvl_0_1 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:14.323 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 
up 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:14.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:14.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:06:14.324 00:06:14.324 --- 10.0.0.2 ping statistics --- 00:06:14.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.324 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:14.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:14.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:06:14.324 00:06:14.324 --- 10.0.0.1 ping statistics --- 00:06:14.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.324 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:14.324 ************************************ 00:06:14.324 START TEST nvmf_filesystem_no_in_capsule 00:06:14.324 ************************************ 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1117 -- # nvmf_filesystem_part 0 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.324 00:07:32 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1354322 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1354322 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@823 -- # '[' -z 1354322 ']' 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:14.324 00:07:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.324 [2024-07-16 00:07:33.036956] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:06:14.324 [2024-07-16 00:07:33.037002] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:14.324 [2024-07-16 00:07:33.093890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.583 [2024-07-16 00:07:33.181828] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:14.583 [2024-07-16 00:07:33.181862] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:14.583 [2024-07-16 00:07:33.181869] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:14.583 [2024-07-16 00:07:33.181876] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:14.583 [2024-07-16 00:07:33.181881] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
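The trace above has just finished the point-to-point TCP wiring the rest of this test rides on: one port of the dual-port E810 NIC (cvl_0_0) is moved into a private network namespace and addressed as the NVMe-oF target (10.0.0.2), while its sibling port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1). A minimal replay of that wiring, using only the commands and names this run actually issued (nvmf/common.sh, nvmf_tcp_init) -- interface names cvl_0_0/cvl_0_1 and the namespace cvl_0_0_ns_spdk are the ones this particular box picked:

    # Two ports of one NIC cabled back-to-back; split them across namespaces
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
    ping -c 1 10.0.0.2                                            # reachability, both ways
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is also why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD in the trace: the nvmf_tgt launched below runs under "ip netns exec cvl_0_0_ns_spdk" so that it listens on the namespaced target port.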
00:06:14.583 [2024-07-16 00:07:33.181921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.583 [2024-07-16 00:07:33.182019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.583 [2024-07-16 00:07:33.182032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.583 [2024-07-16 00:07:33.182033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.149 00:07:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:15.149 00:07:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # return 0 00:06:15.149 00:07:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:15.149 00:07:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:15.149 00:07:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.149 00:07:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:15.149 00:07:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:15.149 00:07:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:15.149 00:07:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:15.149 00:07:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.149 [2024-07-16 00:07:33.889258] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.149 00:07:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:15.149 00:07:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:15.149 00:07:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:15.149 00:07:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.409 Malloc1 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.409 [2024-07-16 00:07:34.044297] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1372 -- # local bdev_name=Malloc1 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1373 -- # local bdev_info 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bs 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local nb 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # bdev_info='[ 00:06:15.409 { 00:06:15.409 "name": "Malloc1", 00:06:15.409 "aliases": [ 00:06:15.409 "39e06ef9-05c9-4161-8ba7-4169ce6aab24" 00:06:15.409 ], 00:06:15.409 "product_name": "Malloc disk", 00:06:15.409 "block_size": 512, 00:06:15.409 "num_blocks": 1048576, 00:06:15.409 "uuid": "39e06ef9-05c9-4161-8ba7-4169ce6aab24", 00:06:15.409 "assigned_rate_limits": { 00:06:15.409 "rw_ios_per_sec": 0, 00:06:15.409 "rw_mbytes_per_sec": 0, 00:06:15.409 "r_mbytes_per_sec": 0, 00:06:15.409 "w_mbytes_per_sec": 0 00:06:15.409 }, 00:06:15.409 "claimed": true, 00:06:15.409 "claim_type": "exclusive_write", 00:06:15.409 "zoned": false, 00:06:15.409 "supported_io_types": { 00:06:15.409 "read": true, 00:06:15.409 "write": true, 00:06:15.409 "unmap": true, 00:06:15.409 "flush": true, 00:06:15.409 "reset": true, 00:06:15.409 "nvme_admin": false, 00:06:15.409 "nvme_io": false, 00:06:15.409 "nvme_io_md": false, 00:06:15.409 "write_zeroes": true, 00:06:15.409 "zcopy": true, 00:06:15.409 "get_zone_info": false, 00:06:15.409 "zone_management": false, 00:06:15.409 "zone_append": false, 00:06:15.409 "compare": false, 00:06:15.409 "compare_and_write": false, 00:06:15.409 "abort": true, 00:06:15.409 "seek_hole": false, 00:06:15.409 "seek_data": false, 00:06:15.409 "copy": true, 00:06:15.409 "nvme_iov_md": false 00:06:15.409 }, 00:06:15.409 "memory_domains": [ 00:06:15.409 { 
00:06:15.409 "dma_device_id": "system", 00:06:15.409 "dma_device_type": 1 00:06:15.409 }, 00:06:15.409 { 00:06:15.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.409 "dma_device_type": 2 00:06:15.409 } 00:06:15.409 ], 00:06:15.409 "driver_specific": {} 00:06:15.409 } 00:06:15.409 ]' 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # jq '.[] .block_size' 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # bs=512 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # jq '.[] .num_blocks' 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # nb=1048576 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_size=512 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # echo 512 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:15.409 00:07:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:16.785 00:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:16.785 00:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1192 -- # local i=0 00:06:16.785 00:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:06:16.785 00:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:06:16.785 00:07:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # sleep 2 00:06:18.691 00:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:06:18.691 00:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:06:18.691 00:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:06:18.691 00:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:06:18.691 00:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:06:18.691 00:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # return 0 00:06:18.691 00:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:18.691 00:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:18.691 00:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:18.691 00:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:06:18.691 00:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:18.691 00:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:18.691 00:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:18.691 00:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:18.691 00:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:18.691 00:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:18.691 00:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:18.950 00:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:19.207 00:07:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:20.146 00:07:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:20.146 00:07:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:20.146 00:07:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:06:20.146 00:07:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:20.146 00:07:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.405 ************************************ 00:06:20.405 START TEST filesystem_ext4 00:06:20.405 ************************************ 00:06:20.405 00:07:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:20.405 00:07:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:20.405 00:07:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:20.405 00:07:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:20.405 00:07:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@918 -- # local fstype=ext4 00:06:20.405 00:07:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:06:20.405 00:07:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@920 -- # local i=0 00:06:20.405 00:07:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@921 -- # local force 00:06:20.405 00:07:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # '[' ext4 = ext4 ']' 00:06:20.405 00:07:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # force=-F 00:06:20.405 00:07:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:20.405 mke2fs 1.46.5 (30-Dec-2021) 00:06:20.405 Discarding device blocks: 0/522240 done 00:06:20.405 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:20.405 Filesystem UUID: d73674c9-2fec-432b-9293-59158cb39595 00:06:20.405 Superblock backups stored on blocks: 00:06:20.405 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:20.405 00:06:20.405 Allocating group tables: 0/64 done 00:06:20.405 Writing inode tables: 0/64 done 00:06:20.663 Creating journal (8192 blocks): done 00:06:21.489 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:06:21.489 00:06:21.489 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # return 0 00:06:21.489 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:21.748 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:21.748 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:21.748 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:21.748 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:21.748 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:21.748 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:21.748 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1354322 00:06:21.748 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:21.748 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:21.748 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:21.748 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:21.748 00:06:21.748 real 0m1.450s 00:06:21.748 user 0m0.036s 00:06:21.748 sys 0m0.053s 00:06:21.749 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:21.749 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:21.749 ************************************ 00:06:21.749 END TEST filesystem_ext4 00:06:21.749 ************************************ 00:06:21.749 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:06:21.749 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:21.749 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:06:21.749 00:07:40 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:21.749 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:21.749 ************************************ 00:06:21.749 START TEST filesystem_btrfs 00:06:21.749 ************************************ 00:06:21.749 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:21.749 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:21.749 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:21.749 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:21.749 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@918 -- # local fstype=btrfs 00:06:21.749 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:06:21.749 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@920 -- # local i=0 00:06:21.749 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@921 -- # local force 00:06:21.749 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # '[' btrfs = ext4 ']' 00:06:21.749 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # force=-f 00:06:21.749 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:22.316 btrfs-progs v6.6.2 00:06:22.316 See https://btrfs.readthedocs.io for more information. 00:06:22.316 00:06:22.316 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:22.316 NOTE: several default settings have changed in version 5.15, please make sure 00:06:22.316 this does not affect your deployments: 00:06:22.316 - DUP for metadata (-m dup) 00:06:22.316 - enabled no-holes (-O no-holes) 00:06:22.316 - enabled free-space-tree (-R free-space-tree) 00:06:22.316 00:06:22.316 Label: (null) 00:06:22.316 UUID: 1a6b1d36-f16a-43a3-9666-36008d85b03e 00:06:22.316 Node size: 16384 00:06:22.316 Sector size: 4096 00:06:22.316 Filesystem size: 510.00MiB 00:06:22.316 Block group profiles: 00:06:22.316 Data: single 8.00MiB 00:06:22.316 Metadata: DUP 32.00MiB 00:06:22.316 System: DUP 8.00MiB 00:06:22.316 SSD detected: yes 00:06:22.316 Zoned device: no 00:06:22.316 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:22.316 Runtime features: free-space-tree 00:06:22.316 Checksum: crc32c 00:06:22.316 Number of devices: 1 00:06:22.316 Devices: 00:06:22.316 ID SIZE PATH 00:06:22.316 1 510.00MiB /dev/nvme0n1p1 00:06:22.316 00:06:22.316 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # return 0 00:06:22.316 00:07:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1354322 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:22.576 00:06:22.576 real 0m0.771s 00:06:22.576 user 0m0.030s 00:06:22.576 sys 0m0.118s 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:22.576 ************************************ 00:06:22.576 END TEST filesystem_btrfs 00:06:22.576 ************************************ 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:22.576 ************************************ 00:06:22.576 START TEST filesystem_xfs 00:06:22.576 ************************************ 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create xfs nvme0n1 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@918 -- # local fstype=xfs 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@920 -- # local i=0 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@921 -- # local force 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # '[' xfs = ext4 ']' 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # force=-f 00:06:22.576 00:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:22.836 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:22.836 = sectsz=512 attr=2, projid32bit=1 00:06:22.836 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:22.836 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:22.836 data = bsize=4096 blocks=130560, imaxpct=25 00:06:22.836 = sunit=0 swidth=0 blks 00:06:22.836 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:22.836 log =internal log bsize=4096 blocks=16384, version=2 00:06:22.836 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:22.836 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:23.437 Discarding blocks...Done. 
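All three TESTs in this pass (ext4, btrfs, xfs) funnel through the same make_filesystem helper, and the xtrace above shows its whole logic. A condensed sketch, omitting only the retry bookkeeping (the local i=0 visible in the trace) that the real autotest_common.sh helper keeps around mkfs failures:

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local force
        # ext4 spells its force flag -F; btrfs and xfs take -f
        if [ "$fstype" = ext4 ]; then
            force=-F
        else
            force=-f
        fi
        mkfs."$fstype" $force "$dev_name"
    }

    make_filesystem xfs /dev/nvme0n1p1    # the invocation traced above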
00:06:23.437 00:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # return 0 00:06:23.437 00:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:25.971 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:25.971 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:25.971 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:25.971 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:25.971 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:25.971 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:25.971 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1354322 00:06:25.971 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:25.971 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:25.971 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:25.971 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:25.971 00:06:25.971 real 0m3.356s 00:06:25.971 user 0m0.026s 00:06:25.971 sys 0m0.067s 00:06:25.971 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:25.971 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:25.971 ************************************ 00:06:25.971 END TEST filesystem_xfs 00:06:25.971 ************************************ 00:06:25.971 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:06:25.971 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:26.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1213 -- # local i=0 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:26.231 00:07:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # return 0 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1354322 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@942 -- # '[' -z 1354322 ']' 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # kill -0 1354322 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # uname 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1354322 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1354322' 00:06:26.231 killing process with pid 1354322 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@961 -- # kill 1354322 00:06:26.231 00:07:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # wait 1354322 00:06:26.491 00:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:26.491 00:06:26.491 real 0m12.356s 00:06:26.491 user 0m48.546s 00:06:26.491 sys 0m1.179s 00:06:26.491 00:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:26.491 00:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:26.491 ************************************ 00:06:26.491 END TEST nvmf_filesystem_no_in_capsule 00:06:26.491 ************************************ 00:06:26.750 00:07:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1136 -- # return 0 00:06:26.750 00:07:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:26.750 00:07:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1093 -- # '[' 3 
-le 1 ']' 00:06:26.750 00:07:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:26.750 00:07:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.750 ************************************ 00:06:26.750 START TEST nvmf_filesystem_in_capsule 00:06:26.750 ************************************ 00:06:26.750 00:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1117 -- # nvmf_filesystem_part 4096 00:06:26.750 00:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:26.750 00:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:26.750 00:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:26.750 00:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:26.750 00:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:26.750 00:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1356610 00:06:26.750 00:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1356610 00:06:26.750 00:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:26.750 00:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@823 -- # '[' -z 1356610 ']' 00:06:26.750 00:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.750 00:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:26.750 00:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.750 00:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:26.750 00:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:26.750 [2024-07-16 00:07:45.447063] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:06:26.750 [2024-07-16 00:07:45.447101] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:26.750 [2024-07-16 00:07:45.503462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.750 [2024-07-16 00:07:45.583817] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:26.750 [2024-07-16 00:07:45.583854] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:26.750 [2024-07-16 00:07:45.583862] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:26.750 [2024-07-16 00:07:45.583868] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:26.750 [2024-07-16 00:07:45.583873] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:26.750 [2024-07-16 00:07:45.583911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.750 [2024-07-16 00:07:45.583987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.750 [2024-07-16 00:07:45.584048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.750 [2024-07-16 00:07:45.584049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.683 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # return 0 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.684 [2024-07-16 00:07:46.300163] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.684 Malloc1 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:27.684 00:07:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.684 [2024-07-16 00:07:46.444446] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1372 -- # local bdev_name=Malloc1 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1373 -- # local bdev_info 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bs 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local nb 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # bdev_info='[ 00:06:27.684 { 00:06:27.684 "name": "Malloc1", 00:06:27.684 "aliases": [ 00:06:27.684 "25f02484-ebef-4925-ade6-6e3f32c51352" 00:06:27.684 ], 00:06:27.684 "product_name": "Malloc disk", 00:06:27.684 "block_size": 512, 00:06:27.684 "num_blocks": 1048576, 00:06:27.684 "uuid": "25f02484-ebef-4925-ade6-6e3f32c51352", 00:06:27.684 "assigned_rate_limits": { 00:06:27.684 "rw_ios_per_sec": 0, 00:06:27.684 "rw_mbytes_per_sec": 0, 00:06:27.684 "r_mbytes_per_sec": 0, 00:06:27.684 "w_mbytes_per_sec": 0 00:06:27.684 }, 00:06:27.684 "claimed": true, 00:06:27.684 "claim_type": "exclusive_write", 00:06:27.684 "zoned": false, 00:06:27.684 "supported_io_types": { 00:06:27.684 "read": true, 00:06:27.684 "write": true, 00:06:27.684 "unmap": true, 00:06:27.684 "flush": true, 00:06:27.684 "reset": true, 00:06:27.684 "nvme_admin": false, 00:06:27.684 "nvme_io": false, 00:06:27.684 "nvme_io_md": false, 00:06:27.684 "write_zeroes": true, 00:06:27.684 "zcopy": true, 00:06:27.684 "get_zone_info": false, 00:06:27.684 "zone_management": false, 00:06:27.684 
"zone_append": false, 00:06:27.684 "compare": false, 00:06:27.684 "compare_and_write": false, 00:06:27.684 "abort": true, 00:06:27.684 "seek_hole": false, 00:06:27.684 "seek_data": false, 00:06:27.684 "copy": true, 00:06:27.684 "nvme_iov_md": false 00:06:27.684 }, 00:06:27.684 "memory_domains": [ 00:06:27.684 { 00:06:27.684 "dma_device_id": "system", 00:06:27.684 "dma_device_type": 1 00:06:27.684 }, 00:06:27.684 { 00:06:27.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.684 "dma_device_type": 2 00:06:27.684 } 00:06:27.684 ], 00:06:27.684 "driver_specific": {} 00:06:27.684 } 00:06:27.684 ]' 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # jq '.[] .block_size' 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # bs=512 00:06:27.684 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # jq '.[] .num_blocks' 00:06:27.942 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # nb=1048576 00:06:27.942 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_size=512 00:06:27.942 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # echo 512 00:06:27.942 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:27.942 00:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:28.878 00:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:28.878 00:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1192 -- # local i=0 00:06:28.878 00:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:06:28.878 00:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:06:28.878 00:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # sleep 2 00:06:31.420 00:07:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:06:31.420 00:07:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:06:31.420 00:07:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:06:31.420 00:07:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:06:31.420 00:07:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:06:31.420 00:07:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # return 0 00:06:31.420 00:07:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:31.421 00:07:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:06:31.421 00:07:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:31.421 00:07:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:31.421 00:07:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:31.421 00:07:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:31.421 00:07:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:31.421 00:07:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:31.421 00:07:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:31.421 00:07:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:31.421 00:07:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:31.421 00:07:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:31.685 00:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:32.622 00:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:32.622 00:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:32.622 00:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:06:32.622 00:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:32.622 00:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.622 ************************************ 00:06:32.622 START TEST filesystem_in_capsule_ext4 00:06:32.622 ************************************ 00:06:32.622 00:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:32.622 00:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:32.622 00:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:32.622 00:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:32.622 00:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@918 -- # local fstype=ext4 00:06:32.622 00:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:06:32.622 00:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@920 -- # local i=0 00:06:32.622 00:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@921 -- # local force 00:06:32.622 00:07:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # '[' ext4 = ext4 ']' 00:06:32.622 00:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # force=-F 00:06:32.622 00:07:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:32.622 mke2fs 1.46.5 (30-Dec-2021) 00:06:32.881 Discarding device blocks: 0/522240 done 00:06:32.881 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:32.881 Filesystem UUID: 6fa037a6-a30c-4618-8b28-aec55245bafe 00:06:32.881 Superblock backups stored on blocks: 00:06:32.881 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:32.881 00:06:32.881 Allocating group tables: 0/64 done 00:06:32.881 Writing inode tables: 0/64 done 00:06:33.140 Creating journal (8192 blocks): done 00:06:33.966 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:06:33.966 00:06:33.966 00:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # return 0 00:06:33.966 00:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:34.224 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:34.224 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:34.224 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:34.224 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:34.224 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:34.224 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:34.482 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1356610 00:06:34.482 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:34.482 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:34.482 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:34.482 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:34.482 00:06:34.482 real 0m1.730s 00:06:34.482 user 0m0.022s 00:06:34.482 sys 0m0.068s 00:06:34.482 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:34.482 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:34.482 ************************************ 00:06:34.482 END TEST filesystem_in_capsule_ext4 00:06:34.482 ************************************ 
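Each filesystem TEST body here mirrors the no-in-capsule pass: target/filesystem.sh@23-43 mounts the fresh filesystem, writes and removes a file, unmounts, and then verifies both that the target process survived and that the namespace is still exported. The one structural difference in this second pass is the transport created with -c 4096, which lets the host carry up to 4 KiB of command data in-capsule with the NVMe/TCP command instead of in a separate data transfer. In outline:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # nvmf_tgt must have survived the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible to the host
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition table intact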
00:06:34.482 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:06:34.483 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:34.483 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:06:34.483 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:34.483 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.483 ************************************ 00:06:34.483 START TEST filesystem_in_capsule_btrfs 00:06:34.483 ************************************ 00:06:34.483 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:34.483 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:34.483 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:34.483 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:34.483 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@918 -- # local fstype=btrfs 00:06:34.483 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:06:34.483 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@920 -- # local i=0 00:06:34.483 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@921 -- # local force 00:06:34.483 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # '[' btrfs = ext4 ']' 00:06:34.483 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # force=-f 00:06:34.483 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:34.741 btrfs-progs v6.6.2 00:06:34.741 See https://btrfs.readthedocs.io for more information. 00:06:34.741 00:06:34.741 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:34.741 NOTE: several default settings have changed in version 5.15, please make sure 00:06:34.741 this does not affect your deployments: 00:06:34.741 - DUP for metadata (-m dup) 00:06:34.741 - enabled no-holes (-O no-holes) 00:06:34.741 - enabled free-space-tree (-R free-space-tree) 00:06:34.741 00:06:34.741 Label: (null) 00:06:34.741 UUID: cfa7092a-9946-4f02-93ac-c80e3d110dc6 00:06:34.741 Node size: 16384 00:06:34.741 Sector size: 4096 00:06:34.741 Filesystem size: 510.00MiB 00:06:34.741 Block group profiles: 00:06:34.741 Data: single 8.00MiB 00:06:34.741 Metadata: DUP 32.00MiB 00:06:34.741 System: DUP 8.00MiB 00:06:34.741 SSD detected: yes 00:06:34.741 Zoned device: no 00:06:34.741 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:34.741 Runtime features: free-space-tree 00:06:34.741 Checksum: crc32c 00:06:34.741 Number of devices: 1 00:06:34.741 Devices: 00:06:34.741 ID SIZE PATH 00:06:34.741 1 510.00MiB /dev/nvme0n1p1 00:06:34.741 00:06:34.741 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # return 0 00:06:34.741 00:07:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1356610 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:35.673 00:06:35.673 real 0m1.092s 00:06:35.673 user 0m0.032s 00:06:35.673 sys 0m0.121s 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:35.673 ************************************ 00:06:35.673 END TEST filesystem_in_capsule_btrfs 00:06:35.673 ************************************ 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1136 -- # return 0 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.673 ************************************ 00:06:35.673 START TEST filesystem_in_capsule_xfs 00:06:35.673 ************************************ 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create xfs nvme0n1 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@918 -- # local fstype=xfs 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@920 -- # local i=0 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@921 -- # local force 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # '[' xfs = ext4 ']' 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # force=-f 00:06:35.673 00:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:35.673 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:35.673 = sectsz=512 attr=2, projid32bit=1 00:06:35.673 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:35.674 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:35.674 data = bsize=4096 blocks=130560, imaxpct=25 00:06:35.674 = sunit=0 swidth=0 blks 00:06:35.674 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:35.674 log =internal log bsize=4096 blocks=16384, version=2 00:06:35.674 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:35.674 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:36.605 Discarding blocks...Done. 
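For reference, the make_filesystem() helper whose xtrace appears above (autotest_common.sh, lines 918-937 in this run) reduces to roughly the sketch below. This is a reconstruction from the trace, not the verbatim source: the retry machinery implied by local i=0 is elided, and the ext4 branch (-F) is inferred, since only the non-ext4 path (-f) is exercised here.

    make_filesystem() {
        local fstype=$1      # btrfs or xfs in the runs above
        local dev_name=$2    # /dev/nvme0n1p1 in this run
        local i=0
        local force
        if [[ $fstype == ext4 ]]; then
            force=-F         # inferred; this branch is not taken in the trace
        else
            force=-f         # branch taken for both btrfs and xfs above
        fi
        mkfs.$fstype $force "$dev_name" && return 0
    }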
00:06:36.605 00:07:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # return 0 00:06:36.605 00:07:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:39.155 00:07:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:39.155 00:07:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:39.155 00:07:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:39.155 00:07:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:39.155 00:07:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:39.155 00:07:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:39.155 00:07:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1356610 00:06:39.155 00:07:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:39.155 00:07:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:39.155 00:07:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:39.155 00:07:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:39.155 00:06:39.155 real 0m3.290s 00:06:39.155 user 0m0.027s 00:06:39.155 sys 0m0.067s 00:06:39.155 00:07:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:39.155 00:07:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:39.155 ************************************ 00:06:39.155 END TEST filesystem_in_capsule_xfs 00:06:39.155 ************************************ 00:06:39.155 00:07:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:06:39.155 00:07:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:39.155 00:07:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:39.155 00:07:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:39.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1213 -- # local i=0 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:06:39.414 00:07:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # return 0 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1356610 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@942 -- # '[' -z 1356610 ']' 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # kill -0 1356610 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # uname 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1356610 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1356610' 00:06:39.414 killing process with pid 1356610 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@961 -- # kill 1356610 00:06:39.414 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # wait 1356610 00:06:39.673 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:39.673 00:06:39.673 real 0m13.063s 00:06:39.673 user 0m51.332s 00:06:39.673 sys 0m1.237s 00:06:39.673 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:39.673 00:07:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:39.673 ************************************ 00:06:39.673 END TEST nvmf_filesystem_in_capsule 00:06:39.673 ************************************ 00:06:39.673 00:07:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1136 -- # return 0 00:06:39.673 00:07:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:39.673 00:07:58 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:06:39.673 00:07:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:39.673 00:07:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:39.673 00:07:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:39.673 00:07:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:39.673 00:07:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:39.673 rmmod nvme_tcp 00:06:39.673 rmmod nvme_fabrics 00:06:39.673 rmmod nvme_keyring 00:06:39.933 00:07:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:39.933 00:07:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:39.933 00:07:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:39.933 00:07:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:39.933 00:07:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:39.933 00:07:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:39.933 00:07:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:39.933 00:07:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:39.933 00:07:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:39.933 00:07:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:39.933 00:07:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:39.933 00:07:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.875 00:08:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:41.875 00:06:41.875 real 0m33.190s 00:06:41.875 user 1m41.531s 00:06:41.875 sys 0m6.559s 00:06:41.875 00:08:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:41.875 00:08:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:41.875 ************************************ 00:06:41.875 END TEST nvmf_filesystem 00:06:41.875 ************************************ 00:06:41.875 00:08:00 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:06:41.875 00:08:00 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:41.875 00:08:00 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:06:41.875 00:08:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:41.875 00:08:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.875 ************************************ 00:06:41.875 START TEST nvmf_target_discovery 00:06:41.875 ************************************ 00:06:41.875 00:08:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:42.134 * Looking for test storage... 
00:06:42.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.134 00:08:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.134 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:42.134 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.134 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.134 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.134 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.134 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.134 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.134 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.134 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.134 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.134 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.134 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:42.134 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:42.134 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:42.135 00:08:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:47.401 00:08:05 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:47.401 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:47.401 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.401 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:47.402 Found net devices under 0000:86:00.0: cvl_0_0 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:47.402 Found net devices under 0000:86:00.1: cvl_0_1 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:47.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:47.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:06:47.402 00:06:47.402 --- 10.0.0.2 ping statistics --- 00:06:47.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.402 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:47.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:47.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:06:47.402 00:06:47.402 --- 10.0.0.1 ping statistics --- 00:06:47.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.402 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1362419 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1362419 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@823 -- # '[' -z 1362419 ']' 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
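Consolidated from the nvmf_tcp_init trace above, the loopback topology this run builds is the following; every command appears verbatim in the trace, and the cvl_0_0/cvl_0_1 interface names are specific to this test node:

    ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt), which is why the listeners below bind 10.0.0.2.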
00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.402 00:08:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:47.402 [2024-07-16 00:08:05.748339] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:06:47.402 [2024-07-16 00:08:05.748381] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:47.402 [2024-07-16 00:08:05.804918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:47.402 [2024-07-16 00:08:05.885083] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:47.402 [2024-07-16 00:08:05.885122] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:47.402 [2024-07-16 00:08:05.885131] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:47.402 [2024-07-16 00:08:05.885139] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:47.402 [2024-07-16 00:08:05.885146] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:47.402 [2024-07-16 00:08:05.885184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.402 [2024-07-16 00:08:05.885201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.402 [2024-07-16 00:08:05.885290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.402 [2024-07-16 00:08:05.885293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # return 0 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.970 [2024-07-16 00:08:06.603218] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
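The RPCs that follow repeat one pattern four times; per the for i in $(seq 1 4) loop traced from target/discovery.sh lines 26-35, each pass is effectively the following (the $i interpolation in the serial number is a readability shorthand for the zero-padded values SPDK00000000000001..04 seen in the trace):

    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create Null$i 102400 512          # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from the script
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # script line 32
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430             # script line 35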
00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.970 Null1 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.970 [2024-07-16 00:08:06.648679] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.970 Null2 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:47.970 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:47.971 00:08:06 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.971 Null3 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.971 Null4 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:47.971 00:08:06 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:47.971 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:06:48.230 00:06:48.230 Discovery Log Number of Records 6, Generation counter 6 00:06:48.230 =====Discovery Log Entry 0====== 00:06:48.230 trtype: tcp 00:06:48.230 adrfam: ipv4 00:06:48.230 subtype: current discovery subsystem 00:06:48.230 treq: not required 00:06:48.230 portid: 0 00:06:48.230 trsvcid: 4420 00:06:48.230 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:48.230 traddr: 10.0.0.2 00:06:48.230 eflags: explicit discovery connections, duplicate discovery information 00:06:48.230 sectype: none 00:06:48.230 =====Discovery Log Entry 1====== 00:06:48.230 trtype: tcp 00:06:48.230 adrfam: ipv4 00:06:48.230 subtype: nvme subsystem 00:06:48.230 treq: not required 00:06:48.230 portid: 0 00:06:48.230 trsvcid: 4420 00:06:48.230 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:48.231 traddr: 10.0.0.2 00:06:48.231 eflags: none 00:06:48.231 sectype: none 00:06:48.231 =====Discovery Log Entry 2====== 00:06:48.231 trtype: tcp 00:06:48.231 adrfam: ipv4 00:06:48.231 subtype: nvme subsystem 00:06:48.231 treq: not required 00:06:48.231 portid: 0 00:06:48.231 trsvcid: 4420 00:06:48.231 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:48.231 traddr: 10.0.0.2 00:06:48.231 eflags: none 00:06:48.231 sectype: none 00:06:48.231 =====Discovery Log Entry 3====== 00:06:48.231 trtype: tcp 00:06:48.231 adrfam: ipv4 00:06:48.231 subtype: nvme subsystem 00:06:48.231 treq: not required 00:06:48.231 portid: 0 00:06:48.231 trsvcid: 4420 00:06:48.231 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:48.231 traddr: 10.0.0.2 00:06:48.231 eflags: none 00:06:48.231 sectype: none 00:06:48.231 =====Discovery Log Entry 4====== 00:06:48.231 trtype: tcp 00:06:48.231 adrfam: ipv4 00:06:48.231 subtype: nvme subsystem 00:06:48.231 treq: not required 
00:06:48.231 portid: 0 00:06:48.231 trsvcid: 4420 00:06:48.231 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:48.231 traddr: 10.0.0.2 00:06:48.231 eflags: none 00:06:48.231 sectype: none 00:06:48.231 =====Discovery Log Entry 5====== 00:06:48.231 trtype: tcp 00:06:48.231 adrfam: ipv4 00:06:48.231 subtype: discovery subsystem referral 00:06:48.231 treq: not required 00:06:48.231 portid: 0 00:06:48.231 trsvcid: 4430 00:06:48.231 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:48.231 traddr: 10.0.0.2 00:06:48.231 eflags: none 00:06:48.231 sectype: none 00:06:48.231 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:48.231 Perform nvmf subsystem discovery via RPC 00:06:48.231 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:48.231 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:48.231 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.231 [ 00:06:48.231 { 00:06:48.231 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:48.231 "subtype": "Discovery", 00:06:48.231 "listen_addresses": [ 00:06:48.231 { 00:06:48.231 "trtype": "TCP", 00:06:48.231 "adrfam": "IPv4", 00:06:48.231 "traddr": "10.0.0.2", 00:06:48.231 "trsvcid": "4420" 00:06:48.231 } 00:06:48.231 ], 00:06:48.231 "allow_any_host": true, 00:06:48.231 "hosts": [] 00:06:48.231 }, 00:06:48.231 { 00:06:48.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:48.231 "subtype": "NVMe", 00:06:48.231 "listen_addresses": [ 00:06:48.231 { 00:06:48.231 "trtype": "TCP", 00:06:48.231 "adrfam": "IPv4", 00:06:48.231 "traddr": "10.0.0.2", 00:06:48.231 "trsvcid": "4420" 00:06:48.231 } 00:06:48.231 ], 00:06:48.231 "allow_any_host": true, 00:06:48.231 "hosts": [], 00:06:48.231 "serial_number": "SPDK00000000000001", 00:06:48.231 "model_number": "SPDK bdev Controller", 00:06:48.231 "max_namespaces": 32, 00:06:48.231 "min_cntlid": 1, 00:06:48.231 "max_cntlid": 65519, 00:06:48.231 "namespaces": [ 00:06:48.231 { 00:06:48.231 "nsid": 1, 00:06:48.231 "bdev_name": "Null1", 00:06:48.231 "name": "Null1", 00:06:48.231 "nguid": "50ADDD9920B741BB8D3A40F4198084DE", 00:06:48.231 "uuid": "50addd99-20b7-41bb-8d3a-40f4198084de" 00:06:48.231 } 00:06:48.231 ] 00:06:48.231 }, 00:06:48.231 { 00:06:48.231 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:48.231 "subtype": "NVMe", 00:06:48.231 "listen_addresses": [ 00:06:48.231 { 00:06:48.231 "trtype": "TCP", 00:06:48.231 "adrfam": "IPv4", 00:06:48.231 "traddr": "10.0.0.2", 00:06:48.231 "trsvcid": "4420" 00:06:48.231 } 00:06:48.231 ], 00:06:48.231 "allow_any_host": true, 00:06:48.231 "hosts": [], 00:06:48.231 "serial_number": "SPDK00000000000002", 00:06:48.231 "model_number": "SPDK bdev Controller", 00:06:48.231 "max_namespaces": 32, 00:06:48.231 "min_cntlid": 1, 00:06:48.231 "max_cntlid": 65519, 00:06:48.231 "namespaces": [ 00:06:48.231 { 00:06:48.231 "nsid": 1, 00:06:48.231 "bdev_name": "Null2", 00:06:48.231 "name": "Null2", 00:06:48.231 "nguid": "7FF2BBE565AA49E9AE7B616C807FE31E", 00:06:48.231 "uuid": "7ff2bbe5-65aa-49e9-ae7b-616c807fe31e" 00:06:48.231 } 00:06:48.231 ] 00:06:48.231 }, 00:06:48.231 { 00:06:48.231 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:48.231 "subtype": "NVMe", 00:06:48.231 "listen_addresses": [ 00:06:48.231 { 00:06:48.231 "trtype": "TCP", 00:06:48.231 "adrfam": "IPv4", 00:06:48.231 "traddr": "10.0.0.2", 00:06:48.231 "trsvcid": "4420" 00:06:48.231 } 00:06:48.231 ], 00:06:48.231 "allow_any_host": true, 
00:06:48.231 "hosts": [], 00:06:48.231 "serial_number": "SPDK00000000000003", 00:06:48.231 "model_number": "SPDK bdev Controller", 00:06:48.231 "max_namespaces": 32, 00:06:48.231 "min_cntlid": 1, 00:06:48.231 "max_cntlid": 65519, 00:06:48.231 "namespaces": [ 00:06:48.231 { 00:06:48.231 "nsid": 1, 00:06:48.231 "bdev_name": "Null3", 00:06:48.231 "name": "Null3", 00:06:48.231 "nguid": "9270A63AD5C243E492EB7E476188983F", 00:06:48.231 "uuid": "9270a63a-d5c2-43e4-92eb-7e476188983f" 00:06:48.231 } 00:06:48.231 ] 00:06:48.231 }, 00:06:48.231 { 00:06:48.231 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:48.231 "subtype": "NVMe", 00:06:48.231 "listen_addresses": [ 00:06:48.231 { 00:06:48.231 "trtype": "TCP", 00:06:48.231 "adrfam": "IPv4", 00:06:48.231 "traddr": "10.0.0.2", 00:06:48.231 "trsvcid": "4420" 00:06:48.231 } 00:06:48.231 ], 00:06:48.232 "allow_any_host": true, 00:06:48.232 "hosts": [], 00:06:48.232 "serial_number": "SPDK00000000000004", 00:06:48.232 "model_number": "SPDK bdev Controller", 00:06:48.232 "max_namespaces": 32, 00:06:48.232 "min_cntlid": 1, 00:06:48.232 "max_cntlid": 65519, 00:06:48.232 "namespaces": [ 00:06:48.232 { 00:06:48.232 "nsid": 1, 00:06:48.232 "bdev_name": "Null4", 00:06:48.232 "name": "Null4", 00:06:48.232 "nguid": "A58734F1F1DF4AE1A0AB6025C6CA61B8", 00:06:48.232 "uuid": "a58734f1-f1df-4ae1-a0ab-6025c6ca61b8" 00:06:48.232 } 00:06:48.232 ] 00:06:48.232 } 00:06:48.232 ] 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.232 00:08:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:48.232 00:08:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:48.232 00:08:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:48.232 00:08:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:48.232 00:08:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.232 00:08:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:48.232 00:08:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:48.232 00:08:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:48.232 00:08:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:48.232 00:08:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:48.232 00:08:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:48.232 00:08:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:48.232 00:08:07 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:48.232 00:08:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:48.232 00:08:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:48.232 00:08:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:48.232 rmmod nvme_tcp 00:06:48.232 rmmod nvme_fabrics 00:06:48.232 rmmod nvme_keyring 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1362419 ']' 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1362419 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@942 -- # '[' -z 1362419 ']' 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # kill -0 1362419 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@947 -- # uname 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1362419 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1362419' 00:06:48.492 killing process with pid 1362419 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@961 -- # kill 1362419 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # wait 1362419 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:48.492 00:08:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.028 00:08:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:51.028 00:06:51.028 real 0m8.724s 00:06:51.028 user 0m7.258s 00:06:51.028 sys 0m4.106s 00:06:51.028 00:08:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:51.028 00:08:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.028 ************************************ 00:06:51.028 END TEST nvmf_target_discovery 00:06:51.028 ************************************ 00:06:51.028 00:08:09 nvmf_tcp -- common/autotest_common.sh@1136 
-- # return 0 00:06:51.028 00:08:09 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:51.028 00:08:09 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:06:51.028 00:08:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:51.028 00:08:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:51.028 ************************************ 00:06:51.028 START TEST nvmf_referrals 00:06:51.028 ************************************ 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:51.028 * Looking for test storage... 00:06:51.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4
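The three loopback addresses set above, together with the 4430 referral port assigned on the next trace line, drive the add/verify/remove cycle traced later in this test. A condensed sketch of the first leg, using the same rpc_cmd wrapper that appears throughout the trace:

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430  # register a discovery referral on the running target
done
rpc_cmd nvmf_discovery_get_referrals | jq length               # the test expects 3 here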
00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:51.028 00:08:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:56.299 00:08:14 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:56.299 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:56.299 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:56.299 00:08:14 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:56.299 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:56.300 Found net devices under 0000:86:00.0: cvl_0_0 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:56.300 Found net devices under 0000:86:00.1: cvl_0_1 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:56.300 00:08:14 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:56.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:56.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:06:56.300 00:06:56.300 --- 10.0.0.2 ping statistics --- 00:06:56.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.300 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:56.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:56.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:06:56.300 00:06:56.300 --- 10.0.0.1 ping statistics --- 00:06:56.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.300 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1366127 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1366127 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@823 -- # '[' -z 1366127 ']' 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
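The plumbing traced above is the phy-mode setup from nvmf/common.sh: the target-side port cvl_0_0 is isolated in its own network namespace, the initiator-side port cvl_0_1 stays in the root namespace, and the two pings prove connectivity in both directions before the target app (launched just below inside that namespace) starts. Condensed from the commands in the trace:

ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
ping -c 1 10.0.0.2                                                  # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> initiator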
00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:56.300 00:08:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:56.300 [2024-07-16 00:08:14.973580] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:06:56.300 [2024-07-16 00:08:14.973624] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.300 [2024-07-16 00:08:15.029687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.300 [2024-07-16 00:08:15.110393] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:56.300 [2024-07-16 00:08:15.110429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:56.300 [2024-07-16 00:08:15.110439] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:56.300 [2024-07-16 00:08:15.110446] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:56.300 [2024-07-16 00:08:15.110452] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:56.300 [2024-07-16 00:08:15.110500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.300 [2024-07-16 00:08:15.110595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.300 [2024-07-16 00:08:15.110683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.300 [2024-07-16 00:08:15.110687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # return 0 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.236 [2024-07-16 00:08:15.816298] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.236 [2024-07-16 00:08:15.829711] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:57.236 00:08:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:57.495 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:57.754 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:58.012 00:08:16 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:58.012 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:58.012 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:58.013 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:58.013 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:58.013 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:58.013 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:58.013 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:58.013 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:58.013 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.013 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:58.013 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:58.013 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:58.013 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:58.013 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:58.013 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:58.013 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:58.013 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.271 00:08:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:58.271 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:58.271 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:58.271 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:58.271 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:58.271 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:58.271 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:58.271 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:58.271 00:08:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:58.271 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:58.271 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:58.271 00:08:17 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:58.271 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:58.271 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:58.271 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:58.271 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:58.529 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:58.787 
00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:58.787 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:58.787 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:58.787 00:08:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:58.787 00:08:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:58.787 00:08:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:58.787 00:08:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:58.787 00:08:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:58.787 00:08:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:58.787 00:08:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:58.787 rmmod nvme_tcp 00:06:58.787 rmmod nvme_fabrics 00:06:58.787 rmmod nvme_keyring 00:06:58.787 00:08:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:58.787 00:08:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:58.787 00:08:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:58.787 00:08:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1366127 ']' 00:06:58.787 00:08:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1366127 00:06:58.787 00:08:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@942 -- # '[' -z 1366127 ']' 00:06:58.787 00:08:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # kill -0 1366127 00:06:58.787 00:08:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@947 -- # uname 00:06:58.788 00:08:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:58.788 00:08:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1366127 00:06:58.788 00:08:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:58.788 00:08:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:58.788 00:08:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1366127' 00:06:58.788 killing process with pid 1366127 00:06:58.788 00:08:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@961 -- # kill 1366127 00:06:58.788 00:08:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # wait 1366127 00:06:59.047 00:08:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:59.047 00:08:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:59.047 00:08:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:59.047 00:08:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:59.047 00:08:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:59.047 00:08:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.047 00:08:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:59.047 00:08:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.583 00:08:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:01.583 00:07:01.583 real 0m10.362s 00:07:01.583 user 0m12.984s 00:07:01.583 sys 0m4.629s 00:07:01.583 00:08:19 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:01.583 00:08:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:01.583 ************************************ 00:07:01.583 END TEST nvmf_referrals 00:07:01.583 ************************************ 00:07:01.583 00:08:19 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:07:01.583 00:08:19 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:01.583 00:08:19 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:07:01.583 00:08:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:01.583 00:08:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.583 ************************************ 00:07:01.583 START TEST nvmf_connect_disconnect 00:07:01.583 ************************************ 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:01.583 * Looking for test storage... 00:07:01.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.583 00:08:19 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:01.583 00:08:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:06.943 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:06.943 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:06.943 00:08:25 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:06.943 Found net devices under 0000:86:00.0: cvl_0_0 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:06.943 Found net devices under 0000:86:00.1: cvl_0_1 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:06.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:06.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:07:06.943 00:07:06.943 --- 10.0.0.2 ping statistics --- 00:07:06.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.943 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:06.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:07:06.943 00:07:06.943 --- 10.0.0.1 ping statistics --- 00:07:06.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.943 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1370064 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1370064 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@823 -- # '[' -z 1370064 ']' 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:06.943 00:08:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:06.943 [2024-07-16 00:08:25.582383] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:07:06.943 [2024-07-16 00:08:25.582426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.943 [2024-07-16 00:08:25.641020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.943 [2024-07-16 00:08:25.715793] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
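The nvmf_tcp_init sequence traced above builds a two-port loopback topology before the target starts: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened on the initiator interface, and one ping in each direction verifies the data path. Condensed into a plain script (interface names and addresses are exactly the ones the trace reports):

# Condensed replay of the nvmf_tcp_init commands traced above.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                           # root namespace -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1       # namespace -> root namespace

Because the target lives in the namespace, NVMF_APP is wrapped with the "ip netns exec cvl_0_0_ns_spdk" prefix above, which is why nvmf_tgt is launched through that prefix.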
00:07:06.943 [2024-07-16 00:08:25.715829] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:06.943 [2024-07-16 00:08:25.715839] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.943 [2024-07-16 00:08:25.715847] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.943 [2024-07-16 00:08:25.715853] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:06.943 [2024-07-16 00:08:25.715902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.943 [2024-07-16 00:08:25.715999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.943 [2024-07-16 00:08:25.716078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.943 [2024-07-16 00:08:25.716081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # return 0 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:07.879 [2024-07-16 00:08:26.429292] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable 
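The rpc_cmd calls in this stretch provision the target end to end: a TCP transport, a 64 MiB / 512-byte-block malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and (just below) a listener on 10.0.0.2:4420. rpc_cmd is a thin wrapper, so the same setup can be replayed with scripts/rpc.py against the default /var/tmp/spdk.sock; the connect/disconnect loop itself runs under set +x and is not traced, so the nvme-cli reconstruction at the end is a hypothetical sketch, not the test's literal loop body:

# Provisioning, replayed with scripts/rpc.py (flags copied from the trace).
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc.py bdev_malloc_create 64 512                       # -> Malloc0 (64 MiB, 512 B blocks)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Hypothetical reconstruction of the untraced loop (num_iterations=5 below);
# each pass prints the "... disconnected 1 controller(s)" line seen in the log.
for ((i = 0; i < 5; i++)); do
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
done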
00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:07.879 [2024-07-16 00:08:26.481263] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:07.879 00:08:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:11.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:14.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:17.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.331 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:24.331 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:24.331 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:24.331 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:24.331 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:24.331 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:24.331 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:24.331 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:24.331 rmmod nvme_tcp 00:07:24.331 rmmod nvme_fabrics 00:07:24.331 rmmod nvme_keyring 00:07:24.331 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:24.331 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:24.331 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:24.331 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1370064 ']' 00:07:24.331 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1370064 00:07:24.332 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@942 -- # '[' -z 1370064 ']' 00:07:24.332 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # kill -0 1370064 00:07:24.332 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # uname 00:07:24.332 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:07:24.332 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1370064 00:07:24.332 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # process_name=reactor_0 00:07:24.332 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:07:24.332 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1370064' 00:07:24.332 killing process with pid 1370064 00:07:24.332 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@961 -- # kill 1370064 00:07:24.332 00:08:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # wait 1370064 00:07:24.332 00:08:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:24.332 00:08:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:24.332 00:08:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:24.332 00:08:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:24.332 00:08:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:24.332 00:08:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.332 00:08:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.332 00:08:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.259 00:08:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:26.259 00:07:26.259 real 0m25.213s 00:07:26.259 user 1m10.340s 00:07:26.259 sys 0m5.383s 00:07:26.259 00:08:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:26.259 00:08:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:26.259 ************************************ 00:07:26.259 END TEST nvmf_connect_disconnect 00:07:26.259 ************************************ 00:07:26.519 00:08:45 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:07:26.519 00:08:45 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:26.519 00:08:45 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:07:26.519 00:08:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:26.519 00:08:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:26.519 ************************************ 00:07:26.519 START TEST nvmf_multitarget 00:07:26.519 ************************************ 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:26.519 * Looking for test storage... 
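The nvmftestfini teardown just traced drains and unloads in a fixed order; condensed below, with the one step hidden behind xtrace_disable_per_cmd marked as an assumption:

sync
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break       # prints the rmmod lines seen above;
done                                       # the 'break' is a reconstruction
modprobe -v -r nvme-fabrics
set -e
kill "$nvmfpid" && wait "$nvmfpid"         # killprocess: pid 1370064 in the trace
# _remove_spdk_ns runs untraced; deleting the namespace is an assumption
# about what it does:
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1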
00:07:26.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
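One quirk worth flagging in the trace above: every test re-sources /etc/opt/spdk-pkgdep/paths/export.sh, and export.sh@2-4 each prepend the golangci, protoc and Go directories unconditionally, which is why the dumped PATH carries the same three toolchain directories six-plus times by this point. It is harmless, but an idempotent prepend would keep these dumps readable; a hypothetical version (not what the sourced script does today):

# Hypothetical idempotent prepend; the traced export.sh prepends blindly,
# hence the duplicated toolchain directories in the PATH dumps above.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;            # already present, do nothing
        *) PATH="$1:$PATH" ;;
    esac
}
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/golangci/1.54.2/bin
export PATH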
00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:26.519 00:08:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:31.853 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:31.853 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:31.853 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:31.853 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:31.853 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:31.853 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:31.853 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:31.853 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:31.853 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:31.853 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:31.853 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:31.853 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:31.853 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:31.853 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:31.854 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:31.854 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:31.854 Found net devices under 0000:86:00.0: cvl_0_0 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:31.854 Found net devices under 0000:86:00.1: cvl_0_1 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:31.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:31.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:07:31.854 00:07:31.854 --- 10.0.0.2 ping statistics --- 00:07:31.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.854 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:31.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:31.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:07:31.854 00:07:31.854 --- 10.0.0.1 ping statistics --- 00:07:31.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.854 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1376449 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1376449 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@823 -- # '[' -z 1376449 ']' 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:31.854 00:08:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:31.854 [2024-07-16 00:08:50.599300] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:07:31.854 [2024-07-16 00:08:50.599343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.854 [2024-07-16 00:08:50.655248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:32.114 [2024-07-16 00:08:50.740338] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.114 [2024-07-16 00:08:50.740370] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.114 [2024-07-16 00:08:50.740378] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.114 [2024-07-16 00:08:50.740384] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.114 [2024-07-16 00:08:50.740390] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:32.114 [2024-07-16 00:08:50.740434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.114 [2024-07-16 00:08:50.740451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.114 [2024-07-16 00:08:50.740536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.114 [2024-07-16 00:08:50.740537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.681 00:08:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:32.682 00:08:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # return 0 00:07:32.682 00:08:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:32.682 00:08:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:32.682 00:08:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:32.682 00:08:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.682 00:08:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:32.682 00:08:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:32.682 00:08:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:32.940 00:08:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:32.940 00:08:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:32.940 "nvmf_tgt_1" 00:07:32.940 00:08:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:32.940 "nvmf_tgt_2" 00:07:32.940 00:08:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:32.940 00:08:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:33.198 00:08:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:33.198 00:08:51 nvmf_tcp.nvmf_multitarget -- 
target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:33.198 true 00:07:33.198 00:08:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:33.198 true 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:33.457 rmmod nvme_tcp 00:07:33.457 rmmod nvme_fabrics 00:07:33.457 rmmod nvme_keyring 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1376449 ']' 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1376449 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@942 -- # '[' -z 1376449 ']' 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # kill -0 1376449 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@947 -- # uname 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1376449 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1376449' 00:07:33.457 killing process with pid 1376449 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@961 -- # kill 1376449 00:07:33.457 00:08:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # wait 1376449 00:07:33.716 00:08:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:33.716 00:08:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:33.716 00:08:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:33.716 00:08:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:07:33.716 00:08:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:33.716 00:08:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.716 00:08:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.716 00:08:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.252 00:08:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:36.252 00:07:36.252 real 0m9.358s 00:07:36.252 user 0m9.070s 00:07:36.252 sys 0m4.413s 00:07:36.252 00:08:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:36.252 00:08:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:36.252 ************************************ 00:07:36.252 END TEST nvmf_multitarget 00:07:36.252 ************************************ 00:07:36.252 00:08:54 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:07:36.252 00:08:54 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:36.252 00:08:54 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:07:36.252 00:08:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:36.252 00:08:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:36.252 ************************************ 00:07:36.252 START TEST nvmf_rpc 00:07:36.252 ************************************ 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:36.252 * Looking for test storage... 
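The nvmf_multitarget run that just completed boils down to a create/count/delete round-trip through multitarget_rpc.py: the default target makes "jq length" report 1, two named targets push it to 3, and deleting both (each returning "true" above) brings it back to 1. Condensed, with RPC_PY standing in for the full test/nvmf/target/multitarget_rpc.py path:

# Condensed from the nvmf_multitarget RPC calls traced above.
RPC_PY=multitarget_rpc.py
[ "$($RPC_PY nvmf_get_targets | jq length)" -eq 1 ]    # only the default target
$RPC_PY nvmf_create_target -n nvmf_tgt_1 -s 32
$RPC_PY nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($RPC_PY nvmf_get_targets | jq length)" -eq 3 ]    # default + the two new ones
$RPC_PY nvmf_delete_target -n nvmf_tgt_1
$RPC_PY nvmf_delete_target -n nvmf_tgt_2
[ "$($RPC_PY nvmf_get_targets | jq length)" -eq 1 ]    # back to the default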
00:07:36.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.252 00:08:54 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:36.253 00:08:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:41.523 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:41.523 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:41.523 Found net devices under 0000:86:00.0: cvl_0_0 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:41.523 Found net devices under 0000:86:00.1: cvl_0_1 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:07:41.523 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:41.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:07:41.524 00:07:41.524 --- 10.0.0.2 ping statistics --- 00:07:41.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.524 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:41.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:07:41.524 00:07:41.524 --- 10.0.0.1 ping statistics --- 00:07:41.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.524 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1380231 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1380231 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@823 -- # '[' -z 1380231 ']' 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:41.524 00:08:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.524 [2024-07-16 00:08:59.969947] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:07:41.524 [2024-07-16 00:08:59.969996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.524 [2024-07-16 00:09:00.033901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.524 [2024-07-16 00:09:00.120184] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.524 [2024-07-16 00:09:00.120218] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:41.524 [2024-07-16 00:09:00.120228] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.524 [2024-07-16 00:09:00.120235] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.524 [2024-07-16 00:09:00.120243] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:41.524 [2024-07-16 00:09:00.120284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.524 [2024-07-16 00:09:00.120360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.524 [2024-07-16 00:09:00.120458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.524 [2024-07-16 00:09:00.120459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # return 0 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:42.091 "tick_rate": 2300000000, 00:07:42.091 "poll_groups": [ 00:07:42.091 { 00:07:42.091 "name": "nvmf_tgt_poll_group_000", 00:07:42.091 "admin_qpairs": 0, 00:07:42.091 "io_qpairs": 0, 00:07:42.091 "current_admin_qpairs": 0, 00:07:42.091 "current_io_qpairs": 0, 00:07:42.091 "pending_bdev_io": 0, 00:07:42.091 "completed_nvme_io": 0, 00:07:42.091 "transports": [] 00:07:42.091 }, 00:07:42.091 { 00:07:42.091 "name": "nvmf_tgt_poll_group_001", 00:07:42.091 "admin_qpairs": 0, 00:07:42.091 "io_qpairs": 0, 00:07:42.091 "current_admin_qpairs": 0, 00:07:42.091 "current_io_qpairs": 0, 00:07:42.091 "pending_bdev_io": 0, 00:07:42.091 "completed_nvme_io": 0, 00:07:42.091 "transports": [] 00:07:42.091 }, 00:07:42.091 { 00:07:42.091 "name": "nvmf_tgt_poll_group_002", 00:07:42.091 "admin_qpairs": 0, 00:07:42.091 "io_qpairs": 0, 00:07:42.091 "current_admin_qpairs": 0, 00:07:42.091 "current_io_qpairs": 0, 00:07:42.091 "pending_bdev_io": 0, 00:07:42.091 "completed_nvme_io": 0, 00:07:42.091 "transports": [] 00:07:42.091 }, 00:07:42.091 { 00:07:42.091 "name": "nvmf_tgt_poll_group_003", 00:07:42.091 "admin_qpairs": 0, 00:07:42.091 "io_qpairs": 0, 00:07:42.091 "current_admin_qpairs": 0, 00:07:42.091 "current_io_qpairs": 0, 00:07:42.091 "pending_bdev_io": 0, 00:07:42.091 "completed_nvme_io": 0, 00:07:42.091 "transports": [] 00:07:42.091 } 00:07:42.091 ] 00:07:42.091 }' 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.091 [2024-07-16 00:09:00.915569] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:42.091 00:09:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:42.092 "tick_rate": 2300000000, 00:07:42.092 "poll_groups": [ 00:07:42.092 { 00:07:42.092 "name": "nvmf_tgt_poll_group_000", 00:07:42.092 "admin_qpairs": 0, 00:07:42.092 "io_qpairs": 0, 00:07:42.092 "current_admin_qpairs": 0, 00:07:42.092 "current_io_qpairs": 0, 00:07:42.092 "pending_bdev_io": 0, 00:07:42.092 "completed_nvme_io": 0, 00:07:42.092 "transports": [ 00:07:42.092 { 00:07:42.092 "trtype": "TCP" 00:07:42.092 } 00:07:42.092 ] 00:07:42.092 }, 00:07:42.092 { 00:07:42.092 "name": "nvmf_tgt_poll_group_001", 00:07:42.092 "admin_qpairs": 0, 00:07:42.092 "io_qpairs": 0, 00:07:42.092 "current_admin_qpairs": 0, 00:07:42.092 "current_io_qpairs": 0, 00:07:42.092 "pending_bdev_io": 0, 00:07:42.092 "completed_nvme_io": 0, 00:07:42.092 "transports": [ 00:07:42.092 { 00:07:42.092 "trtype": "TCP" 00:07:42.092 } 00:07:42.092 ] 00:07:42.092 }, 00:07:42.092 { 00:07:42.092 "name": "nvmf_tgt_poll_group_002", 00:07:42.092 "admin_qpairs": 0, 00:07:42.092 "io_qpairs": 0, 00:07:42.092 "current_admin_qpairs": 0, 00:07:42.092 "current_io_qpairs": 0, 00:07:42.092 "pending_bdev_io": 0, 00:07:42.092 "completed_nvme_io": 0, 00:07:42.092 "transports": [ 00:07:42.092 { 00:07:42.092 "trtype": "TCP" 00:07:42.092 } 00:07:42.092 ] 00:07:42.092 }, 00:07:42.092 { 00:07:42.092 "name": "nvmf_tgt_poll_group_003", 00:07:42.092 "admin_qpairs": 0, 00:07:42.092 "io_qpairs": 0, 00:07:42.092 "current_admin_qpairs": 0, 00:07:42.092 "current_io_qpairs": 0, 00:07:42.092 "pending_bdev_io": 0, 00:07:42.092 "completed_nvme_io": 0, 00:07:42.092 "transports": [ 00:07:42.092 { 00:07:42.092 "trtype": "TCP" 00:07:42.092 } 00:07:42.092 ] 00:07:42.092 } 00:07:42.092 ] 00:07:42.092 }' 00:07:42.350 00:09:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:42.350 00:09:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:42.350 00:09:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:42.350 00:09:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:42.350 00:09:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:42.350 00:09:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:42.350 00:09:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
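jcount (traced above) and jsum (entered here) are small helpers in target/rpc.sh that validate the nvmf_get_stats JSON: one counts the values a jq filter emits, the other sums them. A sketch reconstructed from the traced pipelines (jq into wc -l, respectively awk), so an approximation of the helpers rather than their exact source:

# Approximate shape of the helpers exercised at target/rpc.sh@14-@20;
# $stats holds the JSON captured from 'rpc_cmd nvmf_get_stats' above.
jcount() { jq "$1" <<< "$stats" | wc -l; }                        # how many values
jsum()   { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }  # their numeric sum
# e.g. jcount '.poll_groups[].name'      -> 4 (one poll group per core of -m 0xF)
#      jsum   '.poll_groups[].io_qpairs' -> 0 until a host connects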
00:07:42.350 00:09:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:42.350 00:09:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:42.350 00:09:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:42.350 00:09:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:42.350 00:09:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:42.350 00:09:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:42.350 00:09:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:42.350 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:42.350 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.350 Malloc1 00:07:42.350 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:42.350 00:09:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:42.350 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:42.350 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.350 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:42.350 00:09:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:42.350 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:42.350 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.350 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:42.350 00:09:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:42.350 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:42.350 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.351 [2024-07-16 00:09:01.087622] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # local es=0 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@644 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@630 
-- # local arg=nvme 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # type -t nvme 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # type -P nvme 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # arg=/usr/sbin/nvme 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # [[ -x /usr/sbin/nvme ]] 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@645 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:07:42.351 [2024-07-16 00:09:01.112072] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:07:42.351 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:42.351 could not add new controller: failed to write to nvme-fabrics device 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@645 -- # es=1 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:42.351 00:09:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:43.725 00:09:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:43.725 00:09:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:07:43.725 00:09:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:07:43.725 00:09:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:07:43.725 00:09:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:07:45.630 00:09:04 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:45.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # local es=0 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@644 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@630 -- # local arg=nvme 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # type -t nvme 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # type -P nvme 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # arg=/usr/sbin/nvme 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # [[ -x /usr/sbin/nvme ]] 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@645 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:45.630 [2024-07-16 00:09:04.428179] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:07:45.630 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:45.630 could not add new controller: failed to write to nvme-fabrics device 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@645 -- # es=1 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:45.630 00:09:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:47.010 00:09:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:47.010 00:09:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:07:47.010 00:09:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:07:47.010 00:09:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:07:47.010 00:09:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:48.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:48.916 00:09:07 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.916 [2024-07-16 00:09:07.698836] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:48.916 00:09:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:50.295 00:09:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:50.295 00:09:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:07:50.295 00:09:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:07:50.295 00:09:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:07:50.295 00:09:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:07:52.210 00:09:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:07:52.210 00:09:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:07:52.210 00:09:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:07:52.210 00:09:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:07:52.210 00:09:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:07:52.210 00:09:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:07:52.210 00:09:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:52.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.210 00:09:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:52.210 00:09:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:07:52.210 00:09:10 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:07:52.210 00:09:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:52.210 00:09:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:07:52.210 00:09:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.210 [2024-07-16 00:09:11.038300] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:52.210 00:09:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:53.625 00:09:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:53.625 00:09:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 
-- # local i=0 00:07:53.625 00:09:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:07:53.625 00:09:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:07:53.625 00:09:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:55.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:55.525 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.784 [2024-07-16 00:09:14.377775] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:55.784 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:55.784 00:09:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:55.784 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:55.784 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.784 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:55.784 00:09:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:55.784 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:55.785 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.785 00:09:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:55.785 00:09:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:56.723 00:09:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:56.723 00:09:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:07:56.723 00:09:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:07:56.723 00:09:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:07:56.723 00:09:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:59.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 
0 == 0 ]] 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:59.259 00:09:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:59.260 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:59.260 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.260 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:59.260 00:09:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:59.260 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:59.260 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.260 [2024-07-16 00:09:17.679024] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.260 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:59.260 00:09:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:59.260 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:59.260 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.260 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:59.260 00:09:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:59.260 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:59.260 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.260 00:09:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:59.260 00:09:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:00.197 00:09:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:00.198 00:09:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:08:00.198 00:09:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:08:00.198 00:09:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:08:00.198 00:09:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:08:02.104 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:08:02.104 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:08:02.104 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:08:02.104 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:08:02.104 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:08:02.104 
00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:08:02.104 00:09:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:02.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.104 00:09:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:02.104 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:08:02.104 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:08:02.104 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.104 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:08:02.104 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.104 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:08:02.104 00:09:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.104 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:02.104 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.104 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:02.104 00:09:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.104 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:02.104 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.363 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:02.363 00:09:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:02.363 00:09:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:02.363 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:02.363 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.363 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:02.363 00:09:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.363 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:02.363 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.363 [2024-07-16 00:09:20.968692] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.363 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:02.364 00:09:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:02.364 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:02.364 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.364 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:02.364 00:09:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:02.364 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:02.364 00:09:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.364 00:09:20 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:02.364 00:09:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:03.742 00:09:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:03.742 00:09:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:08:03.742 00:09:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:08:03.742 00:09:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:08:03.742 00:09:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:08:05.646 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:08:05.646 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:08:05.646 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:08:05.646 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:08:05.646 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:08:05.646 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:08:05.646 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:05.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.646 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:05.646 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:08:05.646 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:08:05.646 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:05.646 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:08:05.646 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:05.646 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:08:05.646 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.646 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.646 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.646 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.646 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.646 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.647 [2024-07-16 00:09:24.375985] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.647 [2024-07-16 00:09:24.424101] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.647 [2024-07-16 00:09:24.476263] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.647 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
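The nvme connect test near the top of this segment (target/rpc.sh@88) does not assume the namespace appears instantly; waitforserial polls lsblk until a block device advertising the subsystem serial shows up. A minimal reconstruction from the common/autotest_common.sh@1192-1202 trace above; the 2-second sleep and 15-retry bound are read off the trace, while the exact loop shape and argument handling around them are a plausible reconstruction:

    # Poll until $nvme_device_counter devices with the given serial are visible.
    waitforserial() {
        local serial=$1
        local i=0
        local nvme_device_counter=1 nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

The matching waitforserial_disconnect at @1213-1225 inverts the check with grep -q -w, returning once the device is gone.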
00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.907 [2024-07-16 00:09:24.524436] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
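The loop still in flight here (target/rpc.sh@99-107) churns one subsystem through its full lifecycle five times over, so create, listen, namespace attach, host policy, and teardown all get exercised back to back. Condensed from the trace, with rpc_cmd being the harness wrapper that forwards each call to the running target's JSON-RPC server:

    loops=5    # visible above as the 'seq 1 5' expansion at rpc.sh@99
    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

Each add_listener round produces one of the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notices interleaved above, which is the easiest way to count iterations in the raw log.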
00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.907 [2024-07-16 00:09:24.572600] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:05.907 "tick_rate": 2300000000, 00:08:05.907 "poll_groups": [ 00:08:05.907 { 00:08:05.907 "name": "nvmf_tgt_poll_group_000", 00:08:05.907 "admin_qpairs": 2, 00:08:05.907 "io_qpairs": 168, 00:08:05.907 "current_admin_qpairs": 0, 00:08:05.907 "current_io_qpairs": 0, 00:08:05.907 "pending_bdev_io": 0, 00:08:05.907 "completed_nvme_io": 267, 00:08:05.907 "transports": [ 00:08:05.907 { 00:08:05.907 "trtype": "TCP" 00:08:05.907 } 00:08:05.907 ] 00:08:05.907 }, 00:08:05.907 { 00:08:05.907 "name": "nvmf_tgt_poll_group_001", 00:08:05.907 "admin_qpairs": 2, 00:08:05.907 "io_qpairs": 168, 00:08:05.907 "current_admin_qpairs": 0, 00:08:05.907 "current_io_qpairs": 0, 00:08:05.907 "pending_bdev_io": 0, 00:08:05.907 "completed_nvme_io": 268, 00:08:05.907 "transports": [ 00:08:05.907 { 00:08:05.907 "trtype": "TCP" 00:08:05.907 } 00:08:05.907 ] 00:08:05.907 }, 00:08:05.907 { 
00:08:05.907 "name": "nvmf_tgt_poll_group_002", 00:08:05.907 "admin_qpairs": 1, 00:08:05.907 "io_qpairs": 168, 00:08:05.907 "current_admin_qpairs": 0, 00:08:05.907 "current_io_qpairs": 0, 00:08:05.907 "pending_bdev_io": 0, 00:08:05.907 "completed_nvme_io": 268, 00:08:05.907 "transports": [ 00:08:05.907 { 00:08:05.907 "trtype": "TCP" 00:08:05.907 } 00:08:05.907 ] 00:08:05.907 }, 00:08:05.907 { 00:08:05.907 "name": "nvmf_tgt_poll_group_003", 00:08:05.907 "admin_qpairs": 2, 00:08:05.907 "io_qpairs": 168, 00:08:05.907 "current_admin_qpairs": 0, 00:08:05.907 "current_io_qpairs": 0, 00:08:05.907 "pending_bdev_io": 0, 00:08:05.907 "completed_nvme_io": 219, 00:08:05.907 "transports": [ 00:08:05.907 { 00:08:05.907 "trtype": "TCP" 00:08:05.907 } 00:08:05.907 ] 00:08:05.907 } 00:08:05.907 ] 00:08:05.907 }' 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:05.907 00:09:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:05.907 rmmod nvme_tcp 00:08:05.907 rmmod nvme_fabrics 00:08:06.166 rmmod nvme_keyring 00:08:06.166 00:09:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:06.166 00:09:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:06.166 00:09:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:06.166 00:09:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1380231 ']' 00:08:06.166 00:09:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1380231 00:08:06.166 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@942 -- # '[' -z 1380231 ']' 00:08:06.166 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # kill -0 1380231 00:08:06.166 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@947 -- # uname 00:08:06.166 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:08:06.166 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1380231 00:08:06.167 00:09:24 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@948 -- # process_name=reactor_0 00:08:06.167 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:08:06.167 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1380231' 00:08:06.167 killing process with pid 1380231 00:08:06.167 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@961 -- # kill 1380231 00:08:06.167 00:09:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # wait 1380231 00:08:06.426 00:09:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:06.426 00:09:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:06.426 00:09:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:06.426 00:09:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:06.426 00:09:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:06.426 00:09:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.426 00:09:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.426 00:09:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.346 00:09:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:08.346 00:08:08.346 real 0m32.524s 00:08:08.346 user 1m40.834s 00:08:08.346 sys 0m5.735s 00:08:08.346 00:09:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:08.346 00:09:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.346 ************************************ 00:08:08.346 END TEST nvmf_rpc 00:08:08.346 ************************************ 00:08:08.346 00:09:27 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:08:08.346 00:09:27 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:08.346 00:09:27 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:08:08.346 00:09:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:08.346 00:09:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:08.346 ************************************ 00:08:08.346 START TEST nvmf_invalid 00:08:08.346 ************************************ 00:08:08.346 00:09:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:08.606 * Looking for test storage... 
00:08:08.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.606 00:09:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.606 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:08.606 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.606 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.606 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.606 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.606 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.606 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.606 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.606 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.606 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.606 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.606 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:08.607 00:09:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:13.928 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:13.928 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.928 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:13.929 Found net devices under 0000:86:00.0: cvl_0_0 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:13.929 Found net devices under 0000:86:00.1: cvl_0_1 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:13.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:13.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:08:13.929 00:08:13.929 --- 10.0.0.2 ping statistics --- 00:08:13.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.929 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:13.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:13.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:08:13.929 00:08:13.929 --- 10.0.0.1 ping statistics --- 00:08:13.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.929 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1388477 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1388477 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@823 -- # '[' -z 1388477 ']' 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:13.929 00:09:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:13.929 [2024-07-16 00:09:32.773080] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
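The nvmftestinit plumbing just above turns the two ice ports into a point-to-point test bed: cvl_0_0 moves into a private network namespace and carries the target address, cvl_0_1 stays in the root namespace for the initiator, and the two pings prove the path in both directions before the target comes up. Condensed from the nvmf/common.sh@229-268 trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns

This is also why NVMF_APP gets NVMF_TARGET_NS_CMD prepended at @270: nvmf_tgt itself is launched under "ip netns exec cvl_0_0_ns_spdk", as the nvmfappstart line above shows.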
00:08:13.929 [2024-07-16 00:09:32.773121] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.188 [2024-07-16 00:09:32.831672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.188 [2024-07-16 00:09:32.909258] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.188 [2024-07-16 00:09:32.909301] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.188 [2024-07-16 00:09:32.909311] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.188 [2024-07-16 00:09:32.909318] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.188 [2024-07-16 00:09:32.909325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.188 [2024-07-16 00:09:32.909376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.188 [2024-07-16 00:09:32.909407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.188 [2024-07-16 00:09:32.909493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.188 [2024-07-16 00:09:32.909497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.754 00:09:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:14.754 00:09:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # return 0 00:08:14.754 00:09:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:14.754 00:09:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:14.754 00:09:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:15.013 00:09:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.013 00:09:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:15.013 00:09:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8538 00:08:15.013 [2024-07-16 00:09:33.764547] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:15.013 00:09:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:08:15.013 { 00:08:15.013 "nqn": "nqn.2016-06.io.spdk:cnode8538", 00:08:15.013 "tgt_name": "foobar", 00:08:15.013 "method": "nvmf_create_subsystem", 00:08:15.013 "req_id": 1 00:08:15.013 } 00:08:15.013 Got JSON-RPC error response 00:08:15.013 response: 00:08:15.013 { 00:08:15.013 "code": -32603, 00:08:15.013 "message": "Unable to find target foobar" 00:08:15.013 }' 00:08:15.013 00:09:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:08:15.013 { 00:08:15.013 "nqn": "nqn.2016-06.io.spdk:cnode8538", 00:08:15.013 "tgt_name": "foobar", 00:08:15.013 "method": "nvmf_create_subsystem", 00:08:15.013 "req_id": 1 00:08:15.013 } 00:08:15.013 Got JSON-RPC error response 00:08:15.013 response: 00:08:15.013 { 00:08:15.013 "code": -32603, 00:08:15.013 "message": "Unable to find target foobar" 00:08:15.013 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 
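Every negative case in invalid.sh follows the shape of this first foobar probe: issue the RPC with one deliberately bad argument, capture the JSON-RPC error text, and glob-match the expected message out of it. A sketch of the pattern, assuming the output is captured with 2>&1 and a tolerated nonzero exit status (the trace only shows the resulting $out):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode

    # '-t foobar' names a target that does not exist; expect code -32603.
    out=$("$rpc" nvmf_create_subsystem -t foobar "${nqn}8538" 2>&1) || true
    [[ $out == *"Unable to find target foobar"* ]]

The next two cases below splice a raw control byte into otherwise valid values (echo -e '\x1f' appended to the serial and model number), expecting "Invalid SN" and "Invalid MN" errors with code -32602 rather than the -32603 lookup failure.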
00:08:15.013 00:09:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:15.013 00:09:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27061 00:08:15.272 [2024-07-16 00:09:33.957244] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27061: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:15.272 00:09:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:08:15.272 { 00:08:15.272 "nqn": "nqn.2016-06.io.spdk:cnode27061", 00:08:15.272 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:15.272 "method": "nvmf_create_subsystem", 00:08:15.272 "req_id": 1 00:08:15.272 } 00:08:15.272 Got JSON-RPC error response 00:08:15.272 response: 00:08:15.272 { 00:08:15.272 "code": -32602, 00:08:15.272 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:15.272 }' 00:08:15.272 00:09:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:08:15.272 { 00:08:15.272 "nqn": "nqn.2016-06.io.spdk:cnode27061", 00:08:15.272 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:15.272 "method": "nvmf_create_subsystem", 00:08:15.272 "req_id": 1 00:08:15.272 } 00:08:15.272 Got JSON-RPC error response 00:08:15.272 response: 00:08:15.272 { 00:08:15.272 "code": -32602, 00:08:15.272 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:15.272 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:15.272 00:09:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:15.272 00:09:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode19312 00:08:15.532 [2024-07-16 00:09:34.153866] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19312: invalid model number 'SPDK_Controller' 00:08:15.532 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:08:15.532 { 00:08:15.532 "nqn": "nqn.2016-06.io.spdk:cnode19312", 00:08:15.532 "model_number": "SPDK_Controller\u001f", 00:08:15.532 "method": "nvmf_create_subsystem", 00:08:15.532 "req_id": 1 00:08:15.532 } 00:08:15.532 Got JSON-RPC error response 00:08:15.532 response: 00:08:15.532 { 00:08:15.532 "code": -32602, 00:08:15.532 "message": "Invalid MN SPDK_Controller\u001f" 00:08:15.532 }' 00:08:15.532 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:08:15.532 { 00:08:15.532 "nqn": "nqn.2016-06.io.spdk:cnode19312", 00:08:15.532 "model_number": "SPDK_Controller\u001f", 00:08:15.532 "method": "nvmf_create_subsystem", 00:08:15.532 "req_id": 1 00:08:15.532 } 00:08:15.532 Got JSON-RPC error response 00:08:15.532 response: 00:08:15.532 { 00:08:15.532 "code": -32602, 00:08:15.532 "message": "Invalid MN SPDK_Controller\u001f" 00:08:15.532 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:15.532 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:08:15.532 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:08:15.532 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' 
'97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:15.532 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:15.532 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:15.532 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:15.532 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.532 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:08:15.532 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:08:15.532 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 
00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ / == \- ]] 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '/uZ5D=imer?Fq5&E=U4"g' 00:08:15.533 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '/uZ5D=imer?Fq5&E=U4"g' nqn.2016-06.io.spdk:cnode6886 00:08:15.792 [2024-07-16 00:09:34.478944] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6886: invalid serial number '/uZ5D=imer?Fq5&E=U4"g' 00:08:15.792 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:08:15.792 { 00:08:15.792 "nqn": "nqn.2016-06.io.spdk:cnode6886", 00:08:15.792 "serial_number": "/uZ5D=imer?Fq5&E=U4\"g", 00:08:15.792 "method": "nvmf_create_subsystem", 00:08:15.792 "req_id": 1 00:08:15.792 } 00:08:15.792 Got JSON-RPC error response 00:08:15.792 response: 00:08:15.792 { 00:08:15.792 "code": -32602, 00:08:15.792 "message": "Invalid SN 
/uZ5D=imer?Fq5&E=U4\"g" 00:08:15.792 }' 00:08:15.792 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:08:15.792 { 00:08:15.793 "nqn": "nqn.2016-06.io.spdk:cnode6886", 00:08:15.793 "serial_number": "/uZ5D=imer?Fq5&E=U4\"g", 00:08:15.793 "method": "nvmf_create_subsystem", 00:08:15.793 "req_id": 1 00:08:15.793 } 00:08:15.793 Got JSON-RPC error response 00:08:15.793 response: 00:08:15.793 { 00:08:15.793 "code": -32602, 00:08:15.793 "message": "Invalid SN /uZ5D=imer?Fq5&E=U4\"g" 00:08:15.793 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
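[annotation] The "Invalid SN" rejection above is the expected outcome: NVMe's Identify Controller data reserves 20 bytes for the serial number, and the generated string is 21 characters. The gen_random_s 41 call starting here sizes the next probe one past the 40-byte model-number field in the same way. Reduced to essentials, each negative test amounts to the following sketch (run from the SPDK repo root; cnode1 is a placeholder subsystem NQN):

    # Expect JSON-RPC error -32602 / "Invalid SN" for an over-long serial.
    out=$(scripts/rpc.py nvmf_create_subsystem \
          -s 'AAAAAAAAAAAAAAAAAAAAA' nqn.2016-06.io.spdk:cnode1 2>&1) || true
    [[ $out == *'Invalid SN'* ]] && echo 'rejected as expected'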
00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:15.793 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 98 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x4a' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ! 
== \- ]] 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '!E`^jGF@oz*xPL4xXg"C2& bXP'\''X6\FJ8AujRS1' 00:08:16.053 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '!E`^jGF@oz*xPL4xXg"C2& bXP'\''X6\FJ8AujRS1' nqn.2016-06.io.spdk:cnode7707 00:08:16.312 [2024-07-16 00:09:34.920428] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7707: invalid model number '!E`^jGF@oz*xPL4xXg"C2& bXP'X6\FJ8AujRS1' 00:08:16.312 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:08:16.312 { 00:08:16.312 "nqn": "nqn.2016-06.io.spdk:cnode7707", 00:08:16.312 "model_number": "!E`^jGF@oz*x\u007fPL4x\u007fXg\"C2& bXP'\''X6\\FJ8AujRS1", 00:08:16.312 "method": "nvmf_create_subsystem", 00:08:16.312 "req_id": 1 00:08:16.312 } 00:08:16.312 Got JSON-RPC error response 00:08:16.312 response: 00:08:16.312 { 00:08:16.312 "code": -32602, 00:08:16.312 "message": "Invalid MN !E`^jGF@oz*x\u007fPL4x\u007fXg\"C2& bXP'\''X6\\FJ8AujRS1" 00:08:16.312 }' 00:08:16.312 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:08:16.312 { 00:08:16.312 "nqn": "nqn.2016-06.io.spdk:cnode7707", 00:08:16.312 "model_number": "!E`^jGF@oz*x\u007fPL4x\u007fXg\"C2& bXP'X6\\FJ8AujRS1", 00:08:16.312 "method": "nvmf_create_subsystem", 00:08:16.312 "req_id": 1 00:08:16.312 } 00:08:16.312 Got JSON-RPC error response 00:08:16.312 response: 00:08:16.312 { 00:08:16.312 "code": -32602, 00:08:16.312 "message": "Invalid MN !E`^jGF@oz*x\u007fPL4x\u007fXg\"C2& bXP'X6\\FJ8AujRS1" 00:08:16.312 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:16.312 00:09:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:08:16.312 [2024-07-16 00:09:35.101107] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.312 00:09:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:16.570 00:09:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:08:16.570 00:09:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:08:16.570 00:09:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:08:16.570 00:09:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:08:16.570 00:09:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:08:16.853 [2024-07-16 00:09:35.490420] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:16.853 00:09:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:08:16.853 { 00:08:16.853 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:16.853 "listen_address": { 00:08:16.853 "trtype": "tcp", 00:08:16.853 "traddr": "", 00:08:16.853 "trsvcid": "4421" 00:08:16.853 }, 00:08:16.853 "method": "nvmf_subsystem_remove_listener", 00:08:16.853 "req_id": 1 00:08:16.853 } 00:08:16.853 Got JSON-RPC error response 00:08:16.853 response: 00:08:16.853 { 00:08:16.853 "code": -32602, 00:08:16.854 "message": "Invalid parameters" 00:08:16.854 }' 00:08:16.854 00:09:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:08:16.854 { 00:08:16.854 "nqn": "nqn.2016-06.io.spdk:cnode", 
00:08:16.854 "listen_address": { 00:08:16.854 "trtype": "tcp", 00:08:16.854 "traddr": "", 00:08:16.854 "trsvcid": "4421" 00:08:16.854 }, 00:08:16.854 "method": "nvmf_subsystem_remove_listener", 00:08:16.854 "req_id": 1 00:08:16.854 } 00:08:16.854 Got JSON-RPC error response 00:08:16.854 response: 00:08:16.854 { 00:08:16.854 "code": -32602, 00:08:16.854 "message": "Invalid parameters" 00:08:16.854 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:16.854 00:09:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21478 -i 0 00:08:16.854 [2024-07-16 00:09:35.683057] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21478: invalid cntlid range [0-65519] 00:08:17.113 00:09:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:08:17.113 { 00:08:17.113 "nqn": "nqn.2016-06.io.spdk:cnode21478", 00:08:17.113 "min_cntlid": 0, 00:08:17.113 "method": "nvmf_create_subsystem", 00:08:17.113 "req_id": 1 00:08:17.113 } 00:08:17.113 Got JSON-RPC error response 00:08:17.113 response: 00:08:17.113 { 00:08:17.113 "code": -32602, 00:08:17.113 "message": "Invalid cntlid range [0-65519]" 00:08:17.113 }' 00:08:17.113 00:09:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:08:17.113 { 00:08:17.113 "nqn": "nqn.2016-06.io.spdk:cnode21478", 00:08:17.113 "min_cntlid": 0, 00:08:17.113 "method": "nvmf_create_subsystem", 00:08:17.113 "req_id": 1 00:08:17.113 } 00:08:17.113 Got JSON-RPC error response 00:08:17.113 response: 00:08:17.113 { 00:08:17.113 "code": -32602, 00:08:17.113 "message": "Invalid cntlid range [0-65519]" 00:08:17.113 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:17.113 00:09:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14947 -i 65520 00:08:17.113 [2024-07-16 00:09:35.867656] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14947: invalid cntlid range [65520-65519] 00:08:17.113 00:09:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:08:17.113 { 00:08:17.113 "nqn": "nqn.2016-06.io.spdk:cnode14947", 00:08:17.113 "min_cntlid": 65520, 00:08:17.113 "method": "nvmf_create_subsystem", 00:08:17.113 "req_id": 1 00:08:17.113 } 00:08:17.113 Got JSON-RPC error response 00:08:17.113 response: 00:08:17.113 { 00:08:17.113 "code": -32602, 00:08:17.113 "message": "Invalid cntlid range [65520-65519]" 00:08:17.113 }' 00:08:17.113 00:09:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:08:17.113 { 00:08:17.113 "nqn": "nqn.2016-06.io.spdk:cnode14947", 00:08:17.113 "min_cntlid": 65520, 00:08:17.113 "method": "nvmf_create_subsystem", 00:08:17.113 "req_id": 1 00:08:17.113 } 00:08:17.113 Got JSON-RPC error response 00:08:17.113 response: 00:08:17.113 { 00:08:17.113 "code": -32602, 00:08:17.113 "message": "Invalid cntlid range [65520-65519]" 00:08:17.113 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:17.113 00:09:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22778 -I 0 00:08:17.373 [2024-07-16 00:09:36.044305] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22778: invalid cntlid range [1-0] 00:08:17.373 00:09:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # 
out='request: 00:08:17.373 { 00:08:17.373 "nqn": "nqn.2016-06.io.spdk:cnode22778", 00:08:17.373 "max_cntlid": 0, 00:08:17.373 "method": "nvmf_create_subsystem", 00:08:17.373 "req_id": 1 00:08:17.373 } 00:08:17.373 Got JSON-RPC error response 00:08:17.373 response: 00:08:17.373 { 00:08:17.373 "code": -32602, 00:08:17.373 "message": "Invalid cntlid range [1-0]" 00:08:17.373 }' 00:08:17.373 00:09:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:08:17.373 { 00:08:17.373 "nqn": "nqn.2016-06.io.spdk:cnode22778", 00:08:17.373 "max_cntlid": 0, 00:08:17.373 "method": "nvmf_create_subsystem", 00:08:17.373 "req_id": 1 00:08:17.373 } 00:08:17.373 Got JSON-RPC error response 00:08:17.373 response: 00:08:17.373 { 00:08:17.373 "code": -32602, 00:08:17.373 "message": "Invalid cntlid range [1-0]" 00:08:17.373 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:17.373 00:09:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6799 -I 65520 00:08:17.373 [2024-07-16 00:09:36.216811] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6799: invalid cntlid range [1-65520] 00:08:17.632 00:09:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:08:17.632 { 00:08:17.632 "nqn": "nqn.2016-06.io.spdk:cnode6799", 00:08:17.633 "max_cntlid": 65520, 00:08:17.633 "method": "nvmf_create_subsystem", 00:08:17.633 "req_id": 1 00:08:17.633 } 00:08:17.633 Got JSON-RPC error response 00:08:17.633 response: 00:08:17.633 { 00:08:17.633 "code": -32602, 00:08:17.633 "message": "Invalid cntlid range [1-65520]" 00:08:17.633 }' 00:08:17.633 00:09:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:08:17.633 { 00:08:17.633 "nqn": "nqn.2016-06.io.spdk:cnode6799", 00:08:17.633 "max_cntlid": 65520, 00:08:17.633 "method": "nvmf_create_subsystem", 00:08:17.633 "req_id": 1 00:08:17.633 } 00:08:17.633 Got JSON-RPC error response 00:08:17.633 response: 00:08:17.633 { 00:08:17.633 "code": -32602, 00:08:17.633 "message": "Invalid cntlid range [1-65520]" 00:08:17.633 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:17.633 00:09:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17581 -i 6 -I 5 00:08:17.633 [2024-07-16 00:09:36.389378] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17581: invalid cntlid range [6-5] 00:08:17.633 00:09:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:08:17.633 { 00:08:17.633 "nqn": "nqn.2016-06.io.spdk:cnode17581", 00:08:17.633 "min_cntlid": 6, 00:08:17.633 "max_cntlid": 5, 00:08:17.633 "method": "nvmf_create_subsystem", 00:08:17.633 "req_id": 1 00:08:17.633 } 00:08:17.633 Got JSON-RPC error response 00:08:17.633 response: 00:08:17.633 { 00:08:17.633 "code": -32602, 00:08:17.633 "message": "Invalid cntlid range [6-5]" 00:08:17.633 }' 00:08:17.633 00:09:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:08:17.633 { 00:08:17.633 "nqn": "nqn.2016-06.io.spdk:cnode17581", 00:08:17.633 "min_cntlid": 6, 00:08:17.633 "max_cntlid": 5, 00:08:17.633 "method": "nvmf_create_subsystem", 00:08:17.633 "req_id": 1 00:08:17.633 } 00:08:17.633 Got JSON-RPC error response 00:08:17.633 response: 00:08:17.633 { 00:08:17.633 "code": -32602, 00:08:17.633 "message": "Invalid cntlid range [6-5]" 00:08:17.633 } == *\I\n\v\a\l\i\d\ 
\c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:17.633 00:09:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:08:17.893 { 00:08:17.893 "name": "foobar", 00:08:17.893 "method": "nvmf_delete_target", 00:08:17.893 "req_id": 1 00:08:17.893 } 00:08:17.893 Got JSON-RPC error response 00:08:17.893 response: 00:08:17.893 { 00:08:17.893 "code": -32602, 00:08:17.893 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:17.893 }' 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:08:17.893 { 00:08:17.893 "name": "foobar", 00:08:17.893 "method": "nvmf_delete_target", 00:08:17.893 "req_id": 1 00:08:17.893 } 00:08:17.893 Got JSON-RPC error response 00:08:17.893 response: 00:08:17.893 { 00:08:17.893 "code": -32602, 00:08:17.893 "message": "The specified target doesn't exist, cannot delete it." 00:08:17.893 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:17.893 rmmod nvme_tcp 00:08:17.893 rmmod nvme_fabrics 00:08:17.893 rmmod nvme_keyring 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1388477 ']' 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1388477 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@942 -- # '[' -z 1388477 ']' 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # kill -0 1388477 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@947 -- # uname 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1388477 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1388477' 00:08:17.893 killing process with pid 1388477 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@961 -- # kill 1388477 00:08:17.893 00:09:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # wait 1388477 00:08:18.152 00:09:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # 
'[' '' == iso ']'
00:08:18.152 00:09:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:08:18.153 00:09:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:08:18.153 00:09:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:08:18.153 00:09:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns
00:08:18.153 00:09:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:18.153 00:09:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:08:18.153 00:09:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:20.060 00:09:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:08:20.060
00:08:20.060 real 0m11.730s
00:08:20.060 user 0m19.393s
00:08:20.060 sys 0m5.054s
00:08:20.060 00:09:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1118 -- # xtrace_disable
00:08:20.060 00:09:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:08:20.060 ************************************
00:08:20.060 END TEST nvmf_invalid
00:08:20.060 ************************************
00:08:20.320 00:09:38 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0
00:08:20.320 00:09:38 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:08:20.320 00:09:38 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']'
00:08:20.320 00:09:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable
00:08:20.320 00:09:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:08:20.320 ************************************
00:08:20.320 START TEST nvmf_abort
00:08:20.320 ************************************
00:08:20.320 00:09:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:08:20.320 * Looking for test storage...
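[annotation] The real/user/sys block and the END/START banners above come from the run_test wrapper in test/common/autotest_common.sh, which times each suite and brackets it with banners; the "Looking for/Found test storage" echoes around this note are abort.sh locating its own directory. In outline (a simplified sketch, not the verbatim helper):

    # Simplified sketch of run_test; the real helper also does the xtrace
    # bookkeeping visible throughout this log.
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"               # e.g. target/abort.sh --transport=tcp
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }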
00:08:20.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
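[annotation] Before the PATH manipulation in this block, test/nvmf/common.sh pinned the defaults the abort suite relies on; the relevant ones, as echoed in the surrounding trace (a summary, not a verbatim excerpt of the script):

    NVMF_PORT=4420                      # subsystem listener used below
    NVMF_SECOND_PORT=4421               # spare ports for multi-listener tests
    NVMF_THIRD_PORT=4422
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # fresh host NQN for every run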
00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.320 00:09:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:20.321 00:09:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.321 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:20.321 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:20.321 00:09:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:20.321 00:09:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:25.602 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.602 00:09:44 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:25.602 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:25.602 Found net devices under 0000:86:00.0: cvl_0_0 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:25.602 Found net devices under 0000:86:00.1: cvl_0_1 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:08:25.602 00:08:25.602 --- 10.0.0.2 ping statistics --- 00:08:25.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.602 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:25.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:08:25.602 00:08:25.602 --- 10.0.0.1 ping statistics --- 00:08:25.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.602 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1392741 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1392741 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@823 -- # '[' -z 1392741 ']' 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:25.602 00:09:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.603 00:09:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:25.603 00:09:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:25.861 [2024-07-16 00:09:44.472921] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:08:25.861 [2024-07-16 00:09:44.472965] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.861 [2024-07-16 00:09:44.528708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:25.861 [2024-07-16 00:09:44.607843] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.861 [2024-07-16 00:09:44.607879] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
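[annotation] From nvmfappstart onward the target runs inside the cvl_0_0_ns_spdk namespace wired up a few records back, which is why NVMF_APP gained the "ip netns exec" prefix: nvmf_tgt binds 10.0.0.2 inside the namespace while the initiator side stays in the root namespace on 10.0.0.1, with the two cvl ports looped back over the physical link. Stripped of harness bookkeeping, the launch reduces to the following (paths and masks as traced; the socket poll is a simplified stand-in for waitforlisten):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # waitforlisten, simplified: block until the RPC socket appears
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done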
00:08:25.861 [2024-07-16 00:09:44.607887] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.861 [2024-07-16 00:09:44.607893] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.861 [2024-07-16 00:09:44.607899] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.861 [2024-07-16 00:09:44.607936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.861 [2024-07-16 00:09:44.608025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.861 [2024-07-16 00:09:44.608026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # return 0 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.797 [2024-07-16 00:09:45.332147] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.797 Malloc0 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.797 Delay0 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.797 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:26.798 00:09:45 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:26.798 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:26.798 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.798 [2024-07-16 00:09:45.401881] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.798 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:26.798 00:09:45 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:26.798 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:26.798 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.798 00:09:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:26.798 00:09:45 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:26.798 [2024-07-16 00:09:45.509059] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:28.702 [2024-07-16 00:09:47.546381] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0010 is same with the state(5) to be set 00:08:28.702 Initializing NVMe Controllers 00:08:28.702 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:28.702 controller IO queue size 128 less than required 00:08:28.702 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:28.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:28.702 Initialization complete. Launching workers. 
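The rpc_cmd calls traced above build the abort test's target stack before the example workload runs: a TCP transport, a 64 MB malloc bdev wrapped in a delay bdev (so aborts catch I/O still in flight), and subsystem cnode0 listening on 10.0.0.2:4420. Consolidated as plain rpc.py invocations, a sketch under the assumption that rpc_cmd is a thin wrapper around scripts/rpc.py:

    # Transport with the traced options (-t tcp -o -u 8192 -a 256).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    # 64 MB malloc bdev, 4096-byte blocks, wrapped in a delay bdev that
    # injects fixed latency (in microseconds) on every read/write path.
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Subsystem cnode0: allow any host (-a), expose Delay0 as NSID 1, listen on TCP 4420.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Flood the controller at queue depth 128 for one second, aborting in-flight I/O.
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128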
00:08:28.702 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41570 00:08:28.702 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41631, failed to submit 62 00:08:28.702 success 41574, unsuccess 57, failed 0 00:08:28.702 00:09:47 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:28.961 rmmod nvme_tcp 00:08:28.961 rmmod nvme_fabrics 00:08:28.961 rmmod nvme_keyring 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1392741 ']' 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1392741 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@942 -- # '[' -z 1392741 ']' 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # kill -0 1392741 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@947 -- # uname 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1392741 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1392741' 00:08:28.961 killing process with pid 1392741 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@961 -- # kill 1392741 00:08:28.961 00:09:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # wait 1392741 00:08:29.220 00:09:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:29.220 00:09:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:29.220 00:09:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:29.220 00:09:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:29.220 00:09:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:29.220 00:09:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.220 00:09:47 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.220 00:09:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.126 00:09:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:31.126 00:08:31.126 real 0m10.973s 00:08:31.126 user 0m12.957s 00:08:31.126 sys 0m4.908s 00:08:31.126 00:09:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:31.126 00:09:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.126 ************************************ 00:08:31.126 END TEST nvmf_abort 00:08:31.126 ************************************ 00:08:31.126 00:09:49 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:08:31.126 00:09:49 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:31.126 00:09:49 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:08:31.126 00:09:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:31.126 00:09:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:31.385 ************************************ 00:08:31.385 START TEST nvmf_ns_hotplug_stress 00:08:31.385 ************************************ 00:08:31.385 00:09:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:31.385 * Looking for test storage... 00:08:31.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:31.385 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:31.385 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:31.385 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.385 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.385 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.385 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.385 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.385 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.385 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.385 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.385 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.385 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.385 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:31.385 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:31.385 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.385 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.385 00:09:50 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:31.385 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:31.386 00:09:50 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:31.386 00:09:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:36.704 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:36.704 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:36.704 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.705 00:09:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:36.705 Found net devices under 0000:86:00.0: cvl_0_0 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:36.705 Found net devices under 0000:86:00.1: cvl_0_1 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:36.705 00:09:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:36.705 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:36.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:08:36.964 00:08:36.964 --- 10.0.0.2 ping statistics --- 00:08:36.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.964 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:36.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:36.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:08:36.964 00:08:36.964 --- 10.0.0.1 ping statistics --- 00:08:36.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.964 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1396742 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1396742 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@823 -- # '[' -z 1396742 ']' 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:36.964 00:09:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.964 [2024-07-16 00:09:55.701872] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:08:36.964 [2024-07-16 00:09:55.701913] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.964 [2024-07-16 00:09:55.758763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:37.223 [2024-07-16 00:09:55.834161] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:37.223 [2024-07-16 00:09:55.834199] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.223 [2024-07-16 00:09:55.834208] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.223 [2024-07-16 00:09:55.834215] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.223 [2024-07-16 00:09:55.834221] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:37.223 [2024-07-16 00:09:55.834352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.223 [2024-07-16 00:09:55.834438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.223 [2024-07-16 00:09:55.834442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.792 00:09:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:37.792 00:09:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # return 0 00:08:37.792 00:09:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:37.792 00:09:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:37.792 00:09:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.792 00:09:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.792 00:09:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:37.792 00:09:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:38.051 [2024-07-16 00:09:56.707226] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.051 00:09:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:38.310 00:09:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:38.310 [2024-07-16 00:09:57.080606] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.310 00:09:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:38.569 00:09:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:38.829 Malloc0 00:08:38.829 00:09:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:38.829 Delay0 00:08:38.829 00:09:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.088 00:09:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:39.347 NULL1 00:08:39.347 00:09:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:39.607 00:09:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1397230 00:08:39.607 00:09:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:39.607 00:09:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.607 00:09:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:39.607 00:09:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.866 00:09:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:39.866 00:09:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:40.125 true 00:08:40.125 00:09:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:40.125 00:09:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.385 00:09:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.385 00:09:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:40.385 00:09:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:40.644 true 00:08:40.644 00:09:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:40.644 00:09:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.903 00:09:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.163 00:09:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:41.163 00:09:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:41.163 true 00:08:41.163 00:10:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:41.163 00:10:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.422 00:10:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.680 00:10:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:41.680 00:10:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:41.939 true 00:08:41.939 00:10:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:41.939 00:10:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.196 00:10:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.196 00:10:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:42.196 00:10:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:42.454 true 00:08:42.454 00:10:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:42.454 00:10:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.712 00:10:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.970 00:10:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:42.970 00:10:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:42.970 true 00:08:42.970 00:10:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:42.970 00:10:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.230 00:10:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.489 00:10:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:43.489 00:10:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:43.747 true 00:08:43.747 00:10:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:43.747 00:10:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.017 00:10:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.018 00:10:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:44.018 00:10:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:44.275 true 00:08:44.275 00:10:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:44.275 00:10:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.533 00:10:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.791 00:10:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:44.791 00:10:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:44.791 true 00:08:44.791 00:10:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:44.791 00:10:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.049 00:10:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.308 00:10:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:45.308 00:10:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:45.566 true 00:08:45.566 00:10:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:45.566 00:10:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.566 00:10:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.825 00:10:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:45.825 00:10:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:46.083 true 00:08:46.083 00:10:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:46.083 00:10:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.342 00:10:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.601 00:10:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:46.601 00:10:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:46.601 true 00:08:46.601 00:10:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:46.601 00:10:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.860 00:10:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.119 00:10:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:47.119 00:10:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:47.378 true 00:08:47.378 00:10:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:47.378 00:10:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.378 00:10:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.637 00:10:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:47.637 00:10:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:47.896 true 00:08:47.896 00:10:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:47.896 00:10:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.155 00:10:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.413 00:10:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:48.414 00:10:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:48.414 true 00:08:48.414 00:10:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:48.414 00:10:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.672 00:10:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.963 00:10:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:48.963 00:10:07 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:49.221 true 00:08:49.221 00:10:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:49.222 00:10:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.222 00:10:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.480 00:10:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:49.480 00:10:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:49.739 true 00:08:49.739 00:10:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:49.739 00:10:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.998 00:10:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.998 00:10:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:49.998 00:10:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:50.276 true 00:08:50.276 00:10:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:50.276 00:10:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.535 00:10:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.793 00:10:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:50.793 00:10:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:50.793 true 00:08:50.793 00:10:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:50.793 00:10:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.051 00:10:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.309 00:10:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:51.310 00:10:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:51.569 true 00:08:51.569 
00:10:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:51.569 00:10:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.569 00:10:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.827 00:10:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:51.827 00:10:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:52.086 true 00:08:52.086 00:10:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:52.086 00:10:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.343 00:10:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.601 00:10:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:52.601 00:10:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:52.601 true 00:08:52.601 00:10:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:52.601 00:10:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.859 00:10:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.117 00:10:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:53.117 00:10:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:53.375 true 00:08:53.375 00:10:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:53.375 00:10:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.375 00:10:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.633 00:10:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:53.633 00:10:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:53.892 true 00:08:53.892 00:10:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:53.892 00:10:12 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.152 00:10:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.152 00:10:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:54.152 00:10:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:54.411 true 00:08:54.411 00:10:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:54.411 00:10:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.669 00:10:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.928 00:10:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:54.928 00:10:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:54.928 true 00:08:55.187 00:10:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:55.187 00:10:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.187 00:10:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.446 00:10:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:55.446 00:10:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:55.705 true 00:08:55.705 00:10:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:55.705 00:10:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.965 00:10:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.965 00:10:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:55.965 00:10:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:56.224 true 00:08:56.224 00:10:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:56.224 00:10:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:08:56.483 00:10:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.743 00:10:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:56.743 00:10:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:56.743 true 00:08:56.743 00:10:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:56.743 00:10:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.026 00:10:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.285 00:10:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:08:57.285 00:10:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:57.545 true 00:08:57.545 00:10:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:57.545 00:10:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.545 00:10:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.805 00:10:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:08:57.805 00:10:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:08:58.063 true 00:08:58.063 00:10:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:58.063 00:10:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.322 00:10:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.322 00:10:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:08:58.322 00:10:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:08:58.581 true 00:08:58.581 00:10:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:58.581 00:10:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.840 00:10:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.099 00:10:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:08:59.099 00:10:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:08:59.359 true 00:08:59.359 00:10:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:59.359 00:10:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.359 00:10:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.618 00:10:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:08:59.618 00:10:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:08:59.877 true 00:08:59.877 00:10:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:08:59.877 00:10:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.137 00:10:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.137 00:10:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:09:00.137 00:10:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:09:00.395 true 00:09:00.395 00:10:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:09:00.395 00:10:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.654 00:10:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.913 00:10:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:09:00.913 00:10:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:09:00.913 true 00:09:01.171 00:10:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:09:01.171 00:10:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.171 00:10:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.429 00:10:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 
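[editor's note] Each bdev_null_resize call is answered with a bare "true", which only confirms the RPC was accepted. The test never re-reads the bdev; if you wanted to verify a resize actually took effect, a query like the following would show the updated descriptor (a hypothetical spot-check, not part of the traced script; bdev_get_bdevs -b filters by bdev name):

  # Dump NULL1's descriptor; num_blocks * block_size should reflect the new size.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b NULL1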
00:09:01.429 00:10:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:09:01.688 true 00:09:01.688 00:10:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:09:01.688 00:10:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.947 00:10:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.947 00:10:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:09:01.947 00:10:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:09:02.206 true 00:09:02.206 00:10:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:09:02.206 00:10:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.465 00:10:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.766 00:10:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:09:02.766 00:10:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:09:02.766 true 00:09:02.766 00:10:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:09:02.766 00:10:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.025 00:10:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.283 00:10:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:09:03.283 00:10:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:09:03.542 true 00:09:03.542 00:10:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:09:03.542 00:10:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.542 00:10:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.800 00:10:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:09:03.800 00:10:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1041 00:09:04.060 true 00:09:04.060 00:10:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:09:04.060 00:10:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.318 00:10:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.577 00:10:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:09:04.577 00:10:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:09:04.577 true 00:09:04.577 00:10:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:09:04.577 00:10:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.835 00:10:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.094 00:10:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:09:05.094 00:10:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:09:05.094 true 00:09:05.354 00:10:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:09:05.354 00:10:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.354 00:10:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.613 00:10:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:09:05.613 00:10:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:09:05.871 true 00:09:05.871 00:10:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:09:05.871 00:10:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.130 00:10:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.130 00:10:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:09:06.130 00:10:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:09:06.388 true 00:09:06.388 00:10:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 
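[editor's note] rpc.py itself is a thin JSON-RPC 2.0 client that talks to the running target over a Unix domain socket. The resize call traced above corresponds to roughly the request below; this is a sketch that assumes the default /var/tmp/spdk.sock socket, an nc build with -U support, and the documented bdev_null_resize parameter names, with an arbitrary request id:

  # Approximately what rpc.py sends for "bdev_null_resize NULL1 1042".
  printf '%s\n' '{"jsonrpc":"2.0","id":1,"method":"bdev_null_resize","params":{"name":"NULL1","new_size":1042}}' \
      | nc -U /var/tmp/spdk.sock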
00:09:06.388 00:10:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.647 00:10:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.906 00:10:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:09:06.906 00:10:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:09:06.906 true 00:09:06.906 00:10:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:09:06.906 00:10:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.165 00:10:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.424 00:10:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:09:07.424 00:10:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:09:07.683 true 00:09:07.683 00:10:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:09:07.683 00:10:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.942 00:10:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.942 00:10:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:09:07.942 00:10:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:09:08.201 true 00:09:08.202 00:10:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:09:08.202 00:10:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.461 00:10:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.720 00:10:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:09:08.720 00:10:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:09:08.720 true 00:09:08.720 00:10:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230 00:09:08.720 00:10:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:08.980 00:10:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:09.238 00:10:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050
00:09:09.238 00:10:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050
00:09:09.498 true
00:09:09.498 00:10:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230
00:09:09.498 00:10:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:09.757 00:10:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:09.757 Initializing NVMe Controllers
00:09:09.757 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:09.757 Controller IO queue size 128, less than required.
00:09:09.757 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:09.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:09.757 Initialization complete. Launching workers.
00:09:09.757 ========================================================
00:09:09.757                                                                          Latency(us)
00:09:09.757 Device Information                                                  :       IOPS      MiB/s    Average        min        max
00:09:09.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:   26467.75      12.92    4836.29    2085.58    8723.16
00:09:09.757 ========================================================
00:09:09.757 Total                                                               :   26467.75      12.92    4836.29    2085.58    8723.16
00:09:09.757
00:09:09.757 00:10:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051
00:09:09.757 00:10:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051
00:09:10.017 true
00:09:10.017 00:10:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1397230
00:09:10.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1397230) - No such process
00:09:10.017 00:10:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1397230
00:09:10.017 00:10:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:10.276 00:10:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:10.276 00:10:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:10.276 00:10:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:10.276 00:10:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:10.276 00:10:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:10.276
00:10:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:10.535 null0 00:09:10.535 00:10:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:10.535 00:10:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:10.535 00:10:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:10.794 null1 00:09:10.794 00:10:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:10.794 00:10:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:10.794 00:10:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:11.052 null2 00:09:11.052 00:10:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:11.052 00:10:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:11.052 00:10:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:11.052 null3 00:09:11.052 00:10:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:11.052 00:10:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:11.053 00:10:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:11.311 null4 00:09:11.311 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:11.311 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:11.311 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:11.571 null5 00:09:11.571 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:11.571 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:11.571 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:11.571 null6 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:11.830 null7 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
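[editor's note] From @58 onward the test leaves the single-namespace loop and enters the parallel phase: eight 100 MB null bdevs with 4096-byte blocks (null0 through null7, created above), then eight concurrent add_remove workers, one per namespace ID, whose PIDs are collected and waited on (the "wait 1402663 ..." entry just below). Reconstructed from the @14-@18 worker trace and the @58-@66 driver trace, the harness has approximately this shape; names follow the trace, the bodies are a sketch:

  # Worker (ns_hotplug_stress.sh@14-@18): hot-attach and hot-detach one
  # namespace ten times in a row.
  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev
          $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid
      done
  }

  # Driver (@58-@66): one backing bdev and one background worker per thread,
  # then wait on all eight so their add/remove storms fully interleave.
  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      $rpc_py bdev_null_create "null$i" 100 4096
  done
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &
      pids+=($!)
  done
  wait "${pids[@]}"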
00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1402663 1402664 1402666 1402668 1402670 1402672 1402674 1402675 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.830 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:12.089 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:12.089 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:12.089 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.089 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:12.089 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:12.089 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:12.089 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:12.089 00:10:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:12.348 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:12.608 00:10:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.608 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:12.866 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:12.866 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.866 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:12.866 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:12.866 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:12.866 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:12.866 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:12.866 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:13.124 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.124 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.124 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.124 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.124 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:13.124 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:13.124 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.124 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.124 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:13.124 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.124 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.124 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:13.124 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.124 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.124 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:13.124 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.125 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.125 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:13.125 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.125 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.125 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:13.125 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.125 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.125 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:13.125 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:13.125 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.125 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:13.125 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:13.125 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:13.125 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:13.125 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:13.125 00:10:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:13.382 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.382 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.382 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:13.382 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.382 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.382 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:13.382 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.382 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.382 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:13.382 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.382 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.382 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:13.382 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.383 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.383 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:13.383 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.383 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.383 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:13.383 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.383 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:09:13.383 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:13.383 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.383 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.383 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:13.641 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:13.641 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.641 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:13.641 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:13.641 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:13.641 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:13.641 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:13.641 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:13.899 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.899 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.899 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:13.899 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.899 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.899 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:13.899 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.899 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.899 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:13.899 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.899 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.900 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:13.900 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.900 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.900 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:13.900 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.900 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.900 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:13.900 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.900 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.900 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:13.900 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.900 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.900 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:13.900 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.900 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:13.900 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:13.900 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:13.900 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:13.900 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:13.900 00:10:32 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:13.900 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.158 00:10:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:14.417 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:14.417 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.417 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:14.417 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:14.417 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:14.417 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:14.417 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:14.417 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.674 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.675 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:14.675 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:14.675 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:14.675 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:14.675 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:14.675 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:14.675 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:14.675 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:14.675 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.933 00:10:33 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.933 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.191 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:15.191 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:15.191 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:15.191 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:15.191 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.191 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:15.191 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.191 00:10:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.191 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.191 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.191 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:15.191 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.191 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.191 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:15.450 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.708 00:10:34 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:15.708 rmmod nvme_tcp 00:09:15.708 rmmod nvme_fabrics 00:09:15.708 rmmod nvme_keyring 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1396742 ']' 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1396742 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@942 -- # '[' -z 1396742 ']' 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # kill -0 1396742 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # uname 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1396742 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1396742' 00:09:15.708 killing 
process with pid 1396742 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@961 -- # kill 1396742 00:09:15.708 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # wait 1396742 00:09:15.967 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:15.967 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:15.967 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:15.967 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:15.967 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:15.967 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.967 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:15.967 00:10:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.570 00:10:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:18.570 00:09:18.570 real 0m46.818s 00:09:18.570 user 3m17.990s 00:09:18.570 sys 0m16.767s 00:09:18.570 00:10:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1118 -- # xtrace_disable 00:09:18.570 00:10:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:18.570 ************************************ 00:09:18.570 END TEST nvmf_ns_hotplug_stress 00:09:18.570 ************************************ 00:09:18.570 00:10:36 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:09:18.571 00:10:36 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:18.571 00:10:36 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:09:18.571 00:10:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:09:18.571 00:10:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:18.571 ************************************ 00:09:18.571 START TEST nvmf_connect_stress 00:09:18.571 ************************************ 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:18.571 * Looking for test storage... 
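Note on the churn traced above: the markers place line 16 of test/nvmf/target/ns_hotplug_stress.sh as a bounded counter loop ((( ++i )) / (( i < 10 ))), line 17 as the hot-add RPC (nvmf_subsystem_add_ns -n <nsid> nqn.2016-06.io.spdk:cnode1 null<nsid-1>) and line 18 as the matching hot-remove, with namespaces 1-8 each backed by a null bdev. The shuffled per-round ordering and the eight back-to-back loop-exit checks before the trap suggest one backgrounded worker per namespace. A minimal sketch of that shape, assuming the worker structure and the variable names (rpc_py, nqn) rather than quoting the script verbatim:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for nsid in {1..8}; do
    (
      for (( i = 0; i < 10; i++ )); do                                        # line 16: ten hot-plug rounds
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$((nsid - 1))" # line 17: attach namespace
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"                     # line 18: detach it again
      done
    ) &   # eight concurrent workers would explain the shuffled RPC order in the trace
  done
  wait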
00:09:18.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:18.571 00:10:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:23.867 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:23.867 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:23.867 Found net devices under 0000:86:00.0: cvl_0_0 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:23.867 00:10:42 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:23.867 Found net devices under 0000:86:00.1: cvl_0_1 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.867 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:23.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
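Distilled from the nvmf_tcp_init trace above, this is the two-namespace loopback testbed the rest of the run depends on: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace so the target (10.0.0.2) and the initiator (10.0.0.1, on cvl_0_1 in the root namespace) talk over a real TCP path on a single host. All interface names, addresses and the port below are the ones in this log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target NIC, isolated
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator NIC, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                               # initiator -> target check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator check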
00:09:23.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms
00:09:23.868
00:09:23.868 --- 10.0.0.2 ping statistics ---
00:09:23.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:23.868 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:23.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:23.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms
00:09:23.868
00:09:23.868 --- 10.0.0.1 ping statistics ---
00:09:23.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:23.868 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@716 -- # xtrace_disable
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1407048
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1407048
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@823 -- # '[' -z 1407048 ']'
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@828 -- # local max_retries=100
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:23.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # xtrace_disable
00:09:23.868 00:10:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:09:23.868 [2024-07-16 00:10:42.388407] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization...
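At this point nvmfappstart has launched nvmf_tgt inside the namespace with core mask 0xE (three reactors), and waitforlisten blocks until pid 1407048 answers on the RPC socket before any configuration RPCs are sent. A minimal sketch of that wait, assuming the polling interval and reusing the stock rpc_get_methods call; the real helper lives in common/autotest_common.sh:

  nvmfpid=1407048                        # pid reported by the trace
  rpc_addr=/var/tmp/spdk.sock
  max_retries=100
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for (( n = 0; n < max_retries; n++ )); do
    kill -0 "$nvmfpid"                   # bail out if the target died during startup
    if "$rpc_py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
      break                              # RPC server is listening; target is configurable
    fi
    sleep 0.1
  done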
00:09:23.868 [2024-07-16 00:10:42.388452] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.868 [2024-07-16 00:10:42.444790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:23.868 [2024-07-16 00:10:42.524090] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.868 [2024-07-16 00:10:42.524125] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.868 [2024-07-16 00:10:42.524132] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.868 [2024-07-16 00:10:42.524138] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.868 [2024-07-16 00:10:42.524143] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.868 [2024-07-16 00:10:42.524183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.868 [2024-07-16 00:10:42.524274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:23.868 [2024-07-16 00:10:42.524276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # return 0 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.436 [2024-07-16 00:10:43.252743] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.436 [2024-07-16 00:10:43.282335] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:24.436 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.695 NULL1 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1407219 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:24.695 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.954 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:24.954 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:24.954 00:10:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:24.954 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:24.954 00:10:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.213 00:10:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:25.213 00:10:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:25.213 00:10:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:25.213 00:10:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:25.213 00:10:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.781 00:10:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:25.781 00:10:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:25.781 00:10:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:25.781 00:10:44 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:25.781 00:10:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.041 00:10:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:26.041 00:10:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:26.041 00:10:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.041 00:10:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:26.041 00:10:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.300 00:10:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:26.300 00:10:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:26.300 00:10:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.300 00:10:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:26.300 00:10:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.559 00:10:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:26.559 00:10:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:26.559 00:10:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.559 00:10:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:26.559 00:10:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.819 00:10:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:26.819 00:10:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:26.819 00:10:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.819 00:10:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:26.819 00:10:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.388 00:10:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:27.388 00:10:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:27.388 00:10:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.388 00:10:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:27.388 00:10:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.647 00:10:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:27.647 00:10:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:27.647 00:10:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.647 00:10:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:27.647 00:10:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.907 00:10:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:27.907 00:10:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:27.907 00:10:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.907 00:10:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 
-- # xtrace_disable 00:09:27.907 00:10:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.166 00:10:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:28.166 00:10:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:28.166 00:10:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.166 00:10:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:28.166 00:10:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.734 00:10:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:28.734 00:10:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:28.734 00:10:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.734 00:10:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:28.734 00:10:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.994 00:10:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:28.994 00:10:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:28.994 00:10:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.994 00:10:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:28.994 00:10:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.254 00:10:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:29.254 00:10:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:29.254 00:10:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.254 00:10:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:29.254 00:10:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.513 00:10:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:29.513 00:10:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:29.513 00:10:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.513 00:10:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:29.513 00:10:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.772 00:10:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:29.772 00:10:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:29.772 00:10:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.772 00:10:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:29.772 00:10:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.341 00:10:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:30.341 00:10:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:30.341 00:10:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.341 00:10:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:30.341 00:10:48 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.600 00:10:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:30.600 00:10:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:30.600 00:10:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.600 00:10:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:30.600 00:10:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.859 00:10:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:30.860 00:10:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:30.860 00:10:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.860 00:10:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:30.860 00:10:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.119 00:10:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:31.119 00:10:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:31.119 00:10:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:31.119 00:10:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:31.119 00:10:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.379 00:10:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:31.379 00:10:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:31.379 00:10:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:31.379 00:10:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:31.379 00:10:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.948 00:10:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:31.948 00:10:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:31.948 00:10:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:31.948 00:10:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:31.948 00:10:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.207 00:10:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:32.207 00:10:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:32.207 00:10:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:32.207 00:10:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:32.207 00:10:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.465 00:10:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:32.465 00:10:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:32.465 00:10:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:32.465 00:10:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:32.465 00:10:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 
-- # set +x 00:09:32.725 00:10:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:32.725 00:10:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:32.725 00:10:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:32.725 00:10:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:32.725 00:10:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.293 00:10:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:33.293 00:10:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:33.293 00:10:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:33.293 00:10:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:33.293 00:10:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.551 00:10:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:33.551 00:10:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:33.551 00:10:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:33.551 00:10:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:33.551 00:10:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.810 00:10:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:33.810 00:10:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:33.810 00:10:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:33.810 00:10:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:33.810 00:10:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:34.068 00:10:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.068 00:10:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:34.068 00:10:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:34.068 00:10:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.068 00:10:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:34.328 00:10:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.328 00:10:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:34.328 00:10:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:34.329 00:10:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.329 00:10:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:34.629 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:34.629 00:10:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.629 00:10:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1407219 00:09:34.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1407219) - No such process 00:09:34.629 00:10:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1407219 
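[Editor's note] For orientation, the loop above is the whole of the stress test: connect_stress runs for 10 seconds in the background while the shell repeatedly proves it is still alive with kill -0 and replays a batch of RPCs at the target; when kill -0 finally fails ("No such process"), a wait collects the exit status. A minimal bash sketch of that pattern, with illustrative paths standing in for the real test scripts:

    #!/usr/bin/env bash
    # Sketch of the connect_stress liveness-polling pattern above.
    # ./connect_stress and rpc_cmd are stand-ins for the SPDK test
    # binary and RPC helper; the arguments mirror the log.
    ./connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!

    # kill -0 sends no signal; it only tests that the PID still exists.
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd < rpc.txt   # replay the batch built by the seq 1 20 / cat loop
    done

    wait "$PERF_PID"        # reap the generator once it exits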
00:09:34.629 00:10:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:34.629 00:10:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:34.629 00:10:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:34.629 00:10:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:34.629 00:10:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:09:34.629 00:10:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:34.629 00:10:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:09:34.629 00:10:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:34.629 00:10:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:34.629 rmmod nvme_tcp 00:09:34.887 rmmod nvme_fabrics 00:09:34.887 rmmod nvme_keyring 00:09:34.887 00:10:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:34.887 00:10:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:09:34.887 00:10:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:09:34.887 00:10:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1407048 ']' 00:09:34.887 00:10:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1407048 00:09:34.887 00:10:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@942 -- # '[' -z 1407048 ']' 00:09:34.887 00:10:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # kill -0 1407048 00:09:34.887 00:10:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@947 -- # uname 00:09:34.887 00:10:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:09:34.887 00:10:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1407048 00:09:34.887 00:10:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:09:34.887 00:10:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:09:34.887 00:10:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1407048' 00:09:34.887 killing process with pid 1407048 00:09:34.887 00:10:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@961 -- # kill 1407048 00:09:34.887 00:10:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # wait 1407048 00:09:35.146 00:10:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:35.146 00:10:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:35.146 00:10:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:35.146 00:10:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:35.146 00:10:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:35.146 00:10:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.146 00:10:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.146 00:10:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.052 00:10:55 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:37.052 00:09:37.052 real 0m18.952s 00:09:37.052 user 0m41.072s 00:09:37.052 sys 0m7.997s 00:09:37.052 00:10:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1118 -- # xtrace_disable 00:09:37.052 00:10:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:37.052 ************************************ 00:09:37.052 END TEST nvmf_connect_stress 00:09:37.052 ************************************ 00:09:37.052 00:10:55 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:09:37.052 00:10:55 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:37.052 00:10:55 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:09:37.052 00:10:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:09:37.052 00:10:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:37.052 ************************************ 00:09:37.052 START TEST nvmf_fused_ordering 00:09:37.052 ************************************ 00:09:37.052 00:10:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:37.312 * Looking for test storage... 00:09:37.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:37.312 00:10:55 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.313 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:37.313 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:37.313 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:37.313 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.313 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.313 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.313 00:10:55 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:37.313 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:37.313 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:37.313 00:10:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:37.313 00:10:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:37.313 00:10:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.313 00:10:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:37.313 00:10:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:37.313 00:10:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:37.313 00:10:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.313 00:10:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:37.313 00:10:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.313 00:10:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:37.313 00:10:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:37.313 00:10:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:09:37.313 00:10:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:42.612 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:42.612 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:42.613 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:42.613 Found net devices under 0000:86:00.0: cvl_0_0 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:42.613 Found net devices under 0000:86:00.1: cvl_0_1 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
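[Editor's note] The namespace plumbing that begins with the ip netns add just above, and continues in the entries that follow, is the standard trick for testing NVMe/TCP over a back-to-back physical link on a single host: the target-side port is moved into its own network namespace so initiator and target get independent IP stacks. A condensed, hedged sketch of the same sequence, using the interface names from this run:

    # Sketch of the nvmf_tcp_init netns setup shown in this log.
    # cvl_0_0 (target side) and cvl_0_1 (initiator side) are the two
    # ports enumerated under 0000:86:00.0 / 0000:86:00.1 above.
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side, host stack
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity-check both directions before the test proper starts.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1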
00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:42.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:42.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:09:42.613 00:09:42.613 --- 10.0.0.2 ping statistics --- 00:09:42.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.613 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:42.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:09:42.613 00:09:42.613 --- 10.0.0.1 ping statistics --- 00:09:42.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.613 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1412227 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1412227 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@823 -- # '[' -z 1412227 ']' 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.613 00:11:00 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@828 -- # local max_retries=100 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # xtrace_disable 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:42.613 00:11:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:42.613 [2024-07-16 00:11:00.956703] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:09:42.613 [2024-07-16 00:11:00.956748] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.613 [2024-07-16 00:11:01.011032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.613 [2024-07-16 00:11:01.093127] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.613 [2024-07-16 00:11:01.093160] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.613 [2024-07-16 00:11:01.093167] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.613 [2024-07-16 00:11:01.093173] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.613 [2024-07-16 00:11:01.093178] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
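[Editor's note] Once the reactor comes up on core 1 (next entry), the target for the fused-ordering run is assembled purely over the RPC socket; the calls below are the same ones visible in the entries that follow, rewritten as a hedged rpc.py sketch (the script path is illustrative, the arguments mirror the log):

    # Hedged equivalent of the fused_ordering target bring-up.
    RPC=/path/to/spdk/scripts/rpc.py   # hypothetical location

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512   # ~1 GB null bdev, 512-byte blocks
    $RPC bdev_wait_for_examine
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering client then attaches to that subsystem ("Attached to nqn.2016-06.io.spdk:cnode1", "Namespace ID: 1 size: 1GB") and submits numbered fused command pairs, which appears to be what the long fused_ordering(N) runs below record.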
00:09:42.613 [2024-07-16 00:11:01.093195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # return 0 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:43.182 [2024-07-16 00:11:01.792663] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:43.182 [2024-07-16 00:11:01.808791] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:43.182 NULL1 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:43.182 00:11:01 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:43.182 00:11:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:43.182 [2024-07-16 00:11:01.851554] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:09:43.182 [2024-07-16 00:11:01.851584] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412469 ] 00:09:43.442 Attached to nqn.2016-06.io.spdk:cnode1 00:09:43.442 Namespace ID: 1 size: 1GB 00:09:43.442 fused_ordering(0) 00:09:43.442 fused_ordering(1) 00:09:43.442 fused_ordering(2) 00:09:43.442 fused_ordering(3) 00:09:43.442 fused_ordering(4) 00:09:43.442 fused_ordering(5) 00:09:43.442 fused_ordering(6) 00:09:43.442 fused_ordering(7) 00:09:43.442 fused_ordering(8) 00:09:43.442 fused_ordering(9) 00:09:43.442 fused_ordering(10) 00:09:43.442 fused_ordering(11) 00:09:43.442 fused_ordering(12) 00:09:43.442 fused_ordering(13) 00:09:43.442 fused_ordering(14) 00:09:43.442 fused_ordering(15) 00:09:43.442 fused_ordering(16) 00:09:43.442 fused_ordering(17) 00:09:43.442 fused_ordering(18) 00:09:43.442 fused_ordering(19) 00:09:43.442 fused_ordering(20) 00:09:43.442 fused_ordering(21) 00:09:43.442 fused_ordering(22) 00:09:43.442 fused_ordering(23) 00:09:43.442 fused_ordering(24) 00:09:43.442 fused_ordering(25) 00:09:43.442 fused_ordering(26) 00:09:43.442 fused_ordering(27) 00:09:43.442 fused_ordering(28) 00:09:43.442 fused_ordering(29) 00:09:43.442 fused_ordering(30) 00:09:43.442 fused_ordering(31) 00:09:43.442 fused_ordering(32) 00:09:43.442 fused_ordering(33) 00:09:43.442 fused_ordering(34) 00:09:43.442 fused_ordering(35) 00:09:43.442 fused_ordering(36) 00:09:43.442 fused_ordering(37) 00:09:43.442 fused_ordering(38) 00:09:43.442 fused_ordering(39) 00:09:43.442 fused_ordering(40) 00:09:43.442 fused_ordering(41) 00:09:43.442 fused_ordering(42) 00:09:43.442 fused_ordering(43) 00:09:43.442 fused_ordering(44) 00:09:43.442 fused_ordering(45) 00:09:43.442 fused_ordering(46) 00:09:43.442 fused_ordering(47) 00:09:43.442 fused_ordering(48) 00:09:43.442 fused_ordering(49) 00:09:43.442 fused_ordering(50) 00:09:43.442 fused_ordering(51) 00:09:43.442 fused_ordering(52) 00:09:43.442 fused_ordering(53) 00:09:43.442 fused_ordering(54) 00:09:43.442 fused_ordering(55) 00:09:43.442 fused_ordering(56) 00:09:43.442 fused_ordering(57) 00:09:43.442 fused_ordering(58) 00:09:43.442 fused_ordering(59) 00:09:43.442 fused_ordering(60) 00:09:43.442 fused_ordering(61) 00:09:43.442 fused_ordering(62) 00:09:43.442 fused_ordering(63) 00:09:43.442 fused_ordering(64) 00:09:43.442 fused_ordering(65) 00:09:43.442 fused_ordering(66) 00:09:43.442 fused_ordering(67) 00:09:43.442 fused_ordering(68) 00:09:43.442 fused_ordering(69) 00:09:43.442 fused_ordering(70) 00:09:43.442 fused_ordering(71) 00:09:43.442 fused_ordering(72) 00:09:43.442 fused_ordering(73) 00:09:43.442 fused_ordering(74) 00:09:43.442 fused_ordering(75) 00:09:43.442 fused_ordering(76) 00:09:43.442 fused_ordering(77) 00:09:43.442 fused_ordering(78) 00:09:43.442 fused_ordering(79) 00:09:43.442 fused_ordering(80) 00:09:43.442 
fused_ordering(81) 00:09:43.442 fused_ordering(82) 00:09:43.442 fused_ordering(83) 00:09:43.442 fused_ordering(84) 00:09:43.442 fused_ordering(85) 00:09:43.442 fused_ordering(86) 00:09:43.442 fused_ordering(87) 00:09:43.442 fused_ordering(88) 00:09:43.442 fused_ordering(89) 00:09:43.442 fused_ordering(90) 00:09:43.442 fused_ordering(91) 00:09:43.442 fused_ordering(92) 00:09:43.442 fused_ordering(93) 00:09:43.442 fused_ordering(94) 00:09:43.442 fused_ordering(95) 00:09:43.442 fused_ordering(96) 00:09:43.442 fused_ordering(97) 00:09:43.442 fused_ordering(98) 00:09:43.443 fused_ordering(99) 00:09:43.443 fused_ordering(100) 00:09:43.443 fused_ordering(101) 00:09:43.443 fused_ordering(102) 00:09:43.443 fused_ordering(103) 00:09:43.443 fused_ordering(104) 00:09:43.443 fused_ordering(105) 00:09:43.443 fused_ordering(106) 00:09:43.443 fused_ordering(107) 00:09:43.443 fused_ordering(108) 00:09:43.443 fused_ordering(109) 00:09:43.443 fused_ordering(110) 00:09:43.443 fused_ordering(111) 00:09:43.443 fused_ordering(112) 00:09:43.443 fused_ordering(113) 00:09:43.443 fused_ordering(114) 00:09:43.443 fused_ordering(115) 00:09:43.443 fused_ordering(116) 00:09:43.443 fused_ordering(117) 00:09:43.443 fused_ordering(118) 00:09:43.443 fused_ordering(119) 00:09:43.443 fused_ordering(120) 00:09:43.443 fused_ordering(121) 00:09:43.443 fused_ordering(122) 00:09:43.443 fused_ordering(123) 00:09:43.443 fused_ordering(124) 00:09:43.443 fused_ordering(125) 00:09:43.443 fused_ordering(126) 00:09:43.443 fused_ordering(127) 00:09:43.443 fused_ordering(128) 00:09:43.443 fused_ordering(129) 00:09:43.443 fused_ordering(130) 00:09:43.443 fused_ordering(131) 00:09:43.443 fused_ordering(132) 00:09:43.443 fused_ordering(133) 00:09:43.443 fused_ordering(134) 00:09:43.443 fused_ordering(135) 00:09:43.443 fused_ordering(136) 00:09:43.443 fused_ordering(137) 00:09:43.443 fused_ordering(138) 00:09:43.443 fused_ordering(139) 00:09:43.443 fused_ordering(140) 00:09:43.443 fused_ordering(141) 00:09:43.443 fused_ordering(142) 00:09:43.443 fused_ordering(143) 00:09:43.443 fused_ordering(144) 00:09:43.443 fused_ordering(145) 00:09:43.443 fused_ordering(146) 00:09:43.443 fused_ordering(147) 00:09:43.443 fused_ordering(148) 00:09:43.443 fused_ordering(149) 00:09:43.443 fused_ordering(150) 00:09:43.443 fused_ordering(151) 00:09:43.443 fused_ordering(152) 00:09:43.443 fused_ordering(153) 00:09:43.443 fused_ordering(154) 00:09:43.443 fused_ordering(155) 00:09:43.443 fused_ordering(156) 00:09:43.443 fused_ordering(157) 00:09:43.443 fused_ordering(158) 00:09:43.443 fused_ordering(159) 00:09:43.443 fused_ordering(160) 00:09:43.443 fused_ordering(161) 00:09:43.443 fused_ordering(162) 00:09:43.443 fused_ordering(163) 00:09:43.443 fused_ordering(164) 00:09:43.443 fused_ordering(165) 00:09:43.443 fused_ordering(166) 00:09:43.443 fused_ordering(167) 00:09:43.443 fused_ordering(168) 00:09:43.443 fused_ordering(169) 00:09:43.443 fused_ordering(170) 00:09:43.443 fused_ordering(171) 00:09:43.443 fused_ordering(172) 00:09:43.443 fused_ordering(173) 00:09:43.443 fused_ordering(174) 00:09:43.443 fused_ordering(175) 00:09:43.443 fused_ordering(176) 00:09:43.443 fused_ordering(177) 00:09:43.443 fused_ordering(178) 00:09:43.443 fused_ordering(179) 00:09:43.443 fused_ordering(180) 00:09:43.443 fused_ordering(181) 00:09:43.443 fused_ordering(182) 00:09:43.443 fused_ordering(183) 00:09:43.443 fused_ordering(184) 00:09:43.443 fused_ordering(185) 00:09:43.443 fused_ordering(186) 00:09:43.443 fused_ordering(187) 00:09:43.443 fused_ordering(188) 00:09:43.443 
00:09:43.443 fused_ordering(189) ... fused_ordering(1023) (835 consecutive per-iteration lines, 00:09:43.443-00:09:45.098)
00:09:45.098 00:11:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:09:45.098 00:11:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:09:45.098 00:11:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:45.098 00:11:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync
00:09:45.098 00:11:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:45.098 00:11:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e
00:09:45.098 00:11:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:45.098 00:11:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:45.098 rmmod nvme_tcp
00:09:45.098 rmmod nvme_fabrics
00:09:45.098 rmmod nvme_keyring
00:09:45.098 00:11:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:45.098 00:11:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e
00:09:45.098 00:11:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0
00:09:45.098 00:11:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1412227 ']'
00:09:45.098 00:11:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1412227
00:09:45.098 00:11:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@942 -- # '[' -z 1412227 ']'
00:09:45.098 00:11:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # kill -0 1412227
00:09:45.098 00:11:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@947 -- # uname
00:09:45.098 00:11:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:09:45.098 00:11:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1412227
00:09:45.373 00:11:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # process_name=reactor_1
00:09:45.373 00:11:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']'
00:09:45.373 00:11:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1412227'
00:09:45.373 killing process with pid 1412227
00:09:45.373 00:11:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@961 -- # kill 1412227
00:09:45.373 00:11:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # wait 1412227
00:09:45.373 00:11:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:45.373 00:11:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:45.373 00:11:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:45.373 00:11:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:45.373 00:11:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:45.373 00:11:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:45.373 00:11:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:45.373 00:11:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:47.910 00:11:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:47.910
00:09:47.910 real 0m10.344s
00:09:47.910 user 0m5.367s
00:09:47.910 sys 0m5.403s
00:11:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1118 -- # xtrace_disable
00:09:47.910 00:11:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:09:47.910 ************************************
00:09:47.910 END TEST nvmf_fused_ordering
00:09:47.910 ************************************
00:09:47.910 00:11:06 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0
00:09:47.910 00:11:06 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:11:06 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']'
00:11:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable
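Condensed, the nvmftestfini/nvmfcleanup teardown above amounts to a short shell sequence. A sketch (pid and interface names are the ones from this run; `wait` only works because nvmf_tgt was launched from the same shell, and the netns deletion is an assumption about what _remove_spdk_ns does, since its output is redirected away):

    sync                                # flush caches before unloading kernel modules
    modprobe -v -r nvme-tcp             # per the rmmod lines: drops nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 1412227 && wait 1412227        # stop the nvmf_tgt reactor process (reactor_1)
    ip netns delete cvl_0_0_ns_spdk     # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1            # clear the initiator-side address
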
00:11:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:47.910 ************************************
00:09:47.910 START TEST nvmf_delete_subsystem
00:09:47.910 ************************************
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:09:47.910 * Looking for test storage...
00:09:47.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:(the same three toolchain entries repeated):/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:(the same toolchain entries re-prepended):/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:(the same toolchain entries re-prepended):/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:(the same three entries repeated five more times):/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable
00:11:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:53.185 00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=()
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=()
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=()
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=()
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=()
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=()
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=()
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:09:53.185 Found 0000:86:00.0 (0x8086 - 0x159b)
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:09:53.185 Found 0000:86:00.1 (0x8086 - 0x159b)
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
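The records before and after this point all come from one sysfs loop in gather_supported_nvmf_pci_devs: match PCI IDs against the e810/x722/mlx allow-lists built above, then read each surviving device's netdev names out of /sys (the loop runs once per port; the second port's iteration follows below). In essence, with the two E810 addresses found on this rig:

    for pci in 0000:86:00.0 0000:86:00.1; do
        # every entry under the device's net/ directory is one kernel netdev
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
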
00:09:53.185 00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:09:53.185 Found net devices under 0000:86:00.0: cvl_0_0
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:09:53.185 Found net devices under 0000:86:00.1: cvl_0_1
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:53.186 00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:53.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:53.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms
00:09:53.186
00:09:53.186 --- 10.0.0.2 ping statistics ---
00:09:53.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:53.186 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms
00:09:53.186 00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:53.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:53.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms
00:09:53.186
00:09:53.186 --- 10.0.0.1 ping statistics ---
00:09:53.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:53.186 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms
00:09:53.186 00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
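Taken together, the nvmf_tcp_init records above assemble a namespace-isolated topology between the two NIC ports and sanity-check it in both directions (commands copied from the xtrace; the ports appear to be cabled back-to-back on this rig, so 10.0.0.1/10.0.0.2 traffic actually crosses the NIC rather than the loopback device):

    ip netns add cvl_0_0_ns_spdk                      # the target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target-side port moves in
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                # verify reachability both ways
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
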
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@716 -- # xtrace_disable
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1416216
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1416216
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@823 -- # '[' -z 1416216 ']'
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@828 -- # local max_retries=100
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:53.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # xtrace_disable
00:11:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:53.186 [2024-07-16 00:11:11.797203] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization...
00:09:53.186 [2024-07-16 00:11:11.797258] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:53.186 [2024-07-16 00:11:11.853401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:53.186 [2024-07-16 00:11:11.933066] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:53.186 [2024-07-16 00:11:11.933099] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:53.186 [2024-07-16 00:11:11.933109] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:53.186 [2024-07-16 00:11:11.933117] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:53.186 [2024-07-16 00:11:11.933123] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:53.186 [2024-07-16 00:11:11.933165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:53.186 [2024-07-16 00:11:11.933168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:53.754 00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # return 0
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
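nvmfappstart launches the target inside that namespace and waitforlisten blocks until the RPC socket answers. Stripped of retry bookkeeping, the pattern is roughly this (the until-loop is a crude stand-in for waitforlisten, which actually polls the socket up to max_retries=100 times):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &   # -m 0x3: reactors on cores 0 and 1
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # wait for the UNIX-domain RPC socket
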
00:09:54.013 00:11:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:54.013 [2024-07-16 00:11:12.645175] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:54.013 00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:54.013 [2024-07-16 00:11:12.661312] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:54.013 NULL1
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:54.013 Delay0
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:54.013 00:11:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1416462
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:11:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:09:54.013 [2024-07-16 00:11:12.735837] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
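The rpc_cmd calls above are thin wrappers around scripts/rpc.py talking to /var/tmp/spdk.sock. Replayed by hand, the target-side setup is (arguments copied from the records; the comments are interpretation):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192      # create the TCP transport with the recorded options
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512              # 1000 MB null bdev, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # expose the delay bdev as a namespace

bdev_delay_create takes its latencies in microseconds, so Delay0 adds a full second to every command; that is presumably what keeps enough I/O queued for the delete below to race against.
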
00:09:55.917 00:11:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable
00:11:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:56.178 (runs of "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" interleaved with "starting I/O failed: -6")
00:09:56.178 [2024-07-16 00:11:14.945382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f77e000d600 is same with the state(5) to be set
00:09:56.178 (further completed-with-error runs)
00:09:56.178 [2024-07-16 00:11:14.945839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec35c0 is same with the state(5) to be set
00:09:56.178 (further completed-with-error runs)
00:09:56.179 [2024-07-16 00:11:14.946037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f77e0000c00 is same with the state(5) to be set
00:09:56.179 (further completed-with-error runs)
(sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 Write completed with error (sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 Read completed with error (sct=0, sc=8) 00:09:56.179 [2024-07-16 00:11:14.946234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f77e000cfe0 is same with the state(5) to be set 00:09:57.189 [2024-07-16 00:11:15.913714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec4ac0 is same with the state(5) to be set 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 [2024-07-16 00:11:15.947859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f77e000d2f0 is same with the state(5) to be set 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error 
(sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 [2024-07-16 00:11:15.948147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3000 is same with the state(5) to be set 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 [2024-07-16 00:11:15.948314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec37a0 is same with the state(5) to be set 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 
00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Write completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 Read completed with error (sct=0, sc=8) 00:09:57.189 [2024-07-16 00:11:15.948458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec33e0 is same with the state(5) to be set 00:09:57.189 Initializing NVMe Controllers 00:09:57.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:57.189 Controller IO queue size 128, less than required. 00:09:57.189 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:57.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:57.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:57.189 Initialization complete. Launching workers. 
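The aborted completions above are the intended outcome of this delete_subsystem test case: spdk_nvme_perf drives queue-depth-128 random I/O while the target subsystem is torn down underneath it, so in-flight commands complete with sct=0/sc=8 (generic status 0x08, Command Aborted due to SQ Deletion, per the NVMe base specification) and new submissions fail with -6 (-ENXIO). A minimal sketch of the pattern the test exercises, assuming the standard SPDK rpc.py nvmf_delete_subsystem call; the perf_pid variable is illustrative and the flags mirror the invocation logged further below:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

  # Drive random I/O at queue depth 128 against the subsystem in the background.
  $perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  # Delete the subsystem while I/O is still in flight; queued commands then
  # complete with sct=0/sc=8 and new submissions fail with -ENXIO (-6).
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  wait $perf_pid || true   # perf exits non-zero and reports "errors occurred"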
00:09:57.190 ========================================================
00:09:57.190 Latency(us)
00:09:57.190 Device Information : IOPS MiB/s Average min max
00:09:57.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.68 0.10 943480.14 769.22 1010845.86
00:09:57.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 153.47 0.07 882589.49 290.94 1011470.89
00:09:57.190 ========================================================
00:09:57.190 Total : 349.15 0.17 916715.97 290.94 1011470.89
00:09:57.190
00:09:57.190 [2024-07-16 00:11:15.948963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec4ac0 (9): Bad file descriptor
00:09:57.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:09:57.190 00:11:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:09:57.190 00:11:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:09:57.190 00:11:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1416462
00:09:57.190 00:11:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1416462
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1416462) - No such process
00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1416462
00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # local es=0
00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # valid_exec_arg wait 1416462
00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@630 -- # local arg=wait
00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@634 -- # type -t wait
00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@645 -- # wait 1416462
00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@645 -- # es=1
00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # (( es > 128 ))
00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@664 -- # [[ -n '' ]]
00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@669 -- # (( !es == 0 ))
00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable
00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
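The "kill -0 ... / sleep 0.5" records above implement the test's bounded wait for the perf process to exit; kill -0 only probes whether the PID still exists, and the counter at delete_subsystem.sh line 38 caps the wait at just over 30 iterations. A standalone equivalent of that loop, with assumed variable names:

  # Poll until the process is gone; give up after roughly 15s (30 x 0.5s).
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 30 )) && { echo "process $perf_pid did not exit" >&2; break; }
      sleep 0.5
  done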
00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:57.757 [2024-07-16 00:11:16.478023] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1416989 00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1416989 00:09:57.757 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:57.757 [2024-07-16 00:11:16.538353] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
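For reference, the spdk_nvme_perf invocation launched just above (perf_pid=1416989) decodes as follows. The flag meanings are taken from the tool's help text; the -P description in particular is an assumption worth verifying against spdk_nvme_perf --help for this SPDK revision:

  # -c 0xC    core mask: lcores 2 and 3, matching the "with lcore 2/3" lines
  # -r '...'  transport ID of the NVMe-oF target (TCP, 10.0.0.2:4420)
  # -t 3      run time in seconds
  # -q 128    queue depth per qpair; equal to the controller I/O queue size of
  #           128, which appears to be why the log warns that requests may be
  #           queued at the NVMe driver
  # -w randrw -M 70   random mixed workload, 70% reads / 30% writes
  # -o 512    I/O size in bytes
  # -P 4      number of I/O qpairs per namespace (assumed meaning)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4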
00:09:58.322 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:58.322 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1416989
00:09:58.322 00:11:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:09:58.891 00:11:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:58.891 00:11:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1416989
00:09:58.891 00:11:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:09:59.458 00:11:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:59.458 00:11:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1416989
00:09:59.458 00:11:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:09:59.716 00:11:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:59.716 00:11:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1416989
00:09:59.716 00:11:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:00.282 00:11:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:00.282 00:11:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1416989
00:10:00.282 00:11:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:00.848 00:11:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:00.848 00:11:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1416989
00:10:00.848 00:11:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:00.848 Initializing NVMe Controllers
00:10:00.848 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:00.848 Controller IO queue size 128, less than required.
00:10:00.848 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:00.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:00.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:00.848 Initialization complete. Launching workers.
00:10:00.848 ========================================================
00:10:00.848 Latency(us)
00:10:00.848 Device Information : IOPS MiB/s Average min max
00:10:00.848 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002856.48 1000209.48 1009216.91
00:10:00.848 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005026.00 1000202.27 1012253.06
00:10:00.848 ========================================================
00:10:00.848 Total : 256.00 0.12 1003941.24 1000202.27 1012253.06
00:10:00.848
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1416989
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1416989) - No such process
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1416989
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:01.413 rmmod nvme_tcp
00:10:01.413 rmmod nvme_fabrics
00:10:01.413 rmmod nvme_keyring
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1416216 ']'
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1416216
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@942 -- # '[' -z 1416216 ']'
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # kill -0 1416216
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # uname
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1416216
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # process_name=reactor_0
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']'
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1416216'
killing process with pid 1416216
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@961 -- # kill 1416216
00:10:01.413 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # wait 1416216
00:10:01.672 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:01.672 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:01.672 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:01.672 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:01.672 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:01.672 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:01.672 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:01.672 00:11:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:03.577 00:11:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:03.577
00:10:03.577 real 0m16.081s
00:10:03.577 user 0m30.442s
00:10:03.577 sys 0m4.807s
00:11:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1118 -- # xtrace_disable
00:11:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:10:03.578 ************************************
00:10:03.578 END TEST nvmf_delete_subsystem ************************************
00:10:03.578 00:11:22 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0
00:10:03.578 00:11:22 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:10:03.578 00:11:22 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']'
00:10:03.578 00:11:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable
00:10:03.578 00:11:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:10:03.837 ************************************
00:10:03.837 START TEST nvmf_ns_masking ************************************
00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1117 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:10:03.837 * Looking for test storage...
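The ns_masking test that begins here first generates the identifiers it uses throughout the run, as the records below show ('nvme gen-hostnqn' and 'uuidgen' for the namespace UUIDs and the host ID). In isolation, and with values that differ on every run:

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # host NQN, e.g. nqn.2014-08.org.nvmexpress:uuid:...
  ns1uuid=$(uuidgen)                 # UUID for namespace 1
  ns2uuid=$(uuidgen)                 # UUID for namespace 2
  HOSTID=$(uuidgen)                  # host identifier, passed to 'nvme connect -I'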
00:10:03.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.837 00:11:22 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=0f265f38-f9c3-4d38-8d52-ff995d239273 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=e109cd50-b7f5-4059-83a2-0306c3127b79 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a4233429-68a9-4b14-ac68-f37bf29c8a9f 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:03.838 00:11:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:09.108 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:09.108 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:09.108 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:09.109 
00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:09.109 Found net devices under 0000:86:00.0: cvl_0_0 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:09.109 Found net devices under 0000:86:00.1: cvl_0_1 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:09.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:09.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms
00:10:09.109
00:10:09.109 --- 10.0.0.2 ping statistics ---
00:10:09.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:09.109 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms
00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:09.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:09.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms
00:10:09.109
00:10:09.109 --- 10.0.0.1 ping statistics ---
00:10:09.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:09.109 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms
00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1420928 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1420928 00:11:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@823 -- # '[' -z 1420928 ']' 00:11:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:27
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # xtrace_disable 00:10:09.109 00:11:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:09.109 [2024-07-16 00:11:27.501633] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:10:09.109 [2024-07-16 00:11:27.501675] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.109 [2024-07-16 00:11:27.559375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.109 [2024-07-16 00:11:27.635870] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.109 [2024-07-16 00:11:27.635908] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:09.109 [2024-07-16 00:11:27.635918] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.109 [2024-07-16 00:11:27.635925] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.109 [2024-07-16 00:11:27.635931] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:09.109 [2024-07-16 00:11:27.635961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.678 00:11:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:10:09.678 00:11:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # return 0 00:10:09.678 00:11:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:09.678 00:11:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:09.678 00:11:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:09.678 00:11:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.678 00:11:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:09.678 [2024-07-16 00:11:28.491864] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.678 00:11:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:10:09.678 00:11:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:10:09.678 00:11:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:09.938 Malloc1 00:10:09.938 00:11:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:10.197 Malloc2 00:10:10.197 00:11:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:10.456 00:11:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:10.456 00:11:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.716 [2024-07-16 00:11:29.424613] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.716 00:11:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:10:10.716 00:11:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a4233429-68a9-4b14-ac68-f37bf29c8a9f -a 10.0.0.2 -s 4420 -i 4 00:10:10.975 00:11:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:10:10.975 00:11:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1192 -- # local i=0 00:10:10.975 00:11:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:10:10.975 00:11:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:10:10.975 00:11:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # sleep 2 00:10:12.882 00:11:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:10:12.882 00:11:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:10:12.882 00:11:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:10:12.882 00:11:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:10:12.882 00:11:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:10:12.882 00:11:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # return 0 00:10:12.882 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:12.882 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:12.882 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:12.882 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:12.882 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:10:12.882 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:12.882 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:12.882 [ 0]:0x1 00:10:12.882 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:12.882 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:13.140 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6201530a402b4f61bd5dd8c4e52fe461 00:10:13.140 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6201530a402b4f61bd5dd8c4e52fe461 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:13.140 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:10:13.140 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # 
ns_is_visible 0x1 00:10:13.140 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:13.140 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:13.140 [ 0]:0x1 00:10:13.141 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:13.141 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:13.399 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6201530a402b4f61bd5dd8c4e52fe461 00:10:13.399 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6201530a402b4f61bd5dd8c4e52fe461 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:13.399 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:10:13.399 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:13.400 00:11:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:13.400 [ 1]:0x2 00:10:13.400 00:11:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:13.400 00:11:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:13.400 00:11:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=965de2d163b74d00b6348f44a6590532 00:10:13.400 00:11:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 965de2d163b74d00b6348f44a6590532 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:13.400 00:11:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:10:13.400 00:11:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:13.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.400 00:11:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.658 00:11:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:13.658 00:11:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:10:13.658 00:11:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a4233429-68a9-4b14-ac68-f37bf29c8a9f -a 10.0.0.2 -s 4420 -i 4 00:10:13.962 00:11:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:13.962 00:11:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1192 -- # local i=0 00:10:13.962 00:11:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:10:13.962 00:11:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # [[ -n 1 ]] 00:10:13.962 00:11:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # nvme_device_counter=1 00:10:13.962 00:11:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # sleep 2 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 
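The "[ 0]:0x1" and nguid records above come from the test's ns_is_visible helper: a namespace counts as visible when it appears in 'nvme list-ns' output and reports a non-zero NGUID, while a masked namespace reports an all-zero NGUID. A sketch reconstructed from the commands in the log (not copied from ns_masking.sh itself):

  ns_is_visible() {
      local nsid=$1   # e.g. 0x1
      # The namespace must show up in the controller's active namespace list...
      nvme list-ns /dev/nvme0 | grep "$nsid" || return 1
      # ...and its NGUID must be non-zero; masked namespaces report all zeros.
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }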
00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # return 0 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg ns_is_visible 0x1 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=ns_is_visible 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t ns_is_visible 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # ns_is_visible 0x1 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:15.934 [ 0]:0x2 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=965de2d163b74d00b6348f44a6590532 00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 965de2d163b74d00b6348f44a6590532 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 
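That completes the central assertion of the masking test: namespace 1, recreated with --no-auto-visible, no longer appears in nvme list-ns and identifies with an all-zero NGUID, while namespace 2 remains fully visible. Reconstructed from the ns_masking.sh lines 43-45 in the trace, the visibility probe is approximately this sketch (ctrl_id is set to nvme0 by the connect helper at line 26; the exact in-tree body may differ):

    ns_is_visible() {
        # show whether the namespace ID appears in the controller's active list
        nvme list-ns "/dev/${ctrl_id}" | grep "$1"
        # the decisive check: Identify Namespace must return a real NGUID;
        # a namespace masked from this host reads back as 32 zeros
        local nguid
        nguid=$(nvme id-ns "/dev/${ctrl_id}" -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The per-host grants that the rest of the test toggles are all driven through scripts/rpc.py; condensed from the commands visible in the trace:

    # namespace that no host can see until explicitly allowed
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # grant, then later revoke, visibility for one host NQN
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

Further down, the suite also asserts the failure mode: calling nvmf_ns_remove_host against namespace 2, which was left auto-visible, is rejected with JSON-RPC error -32602 (Invalid parameters), and the test's NOT wrapper counts that expected failure as a pass.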
00:10:15.934 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:16.193 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:10:16.193 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:16.193 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:16.193 [ 0]:0x1 00:10:16.193 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:16.193 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:16.193 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6201530a402b4f61bd5dd8c4e52fe461 00:10:16.193 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6201530a402b4f61bd5dd8c4e52fe461 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:16.193 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:10:16.193 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:16.193 00:11:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:16.193 [ 1]:0x2 00:10:16.193 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:16.193 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=965de2d163b74d00b6348f44a6590532 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 965de2d163b74d00b6348f44a6590532 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg ns_is_visible 0x1 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=ns_is_visible 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t ns_is_visible 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # ns_is_visible 0x1 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:16.453 [ 0]:0x2 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:16.453 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:16.712 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=965de2d163b74d00b6348f44a6590532 00:10:16.712 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 965de2d163b74d00b6348f44a6590532 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:16.712 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:10:16.712 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:16.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.712 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:16.712 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:10:16.712 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a4233429-68a9-4b14-ac68-f37bf29c8a9f -a 10.0.0.2 -s 4420 -i 4 00:10:16.971 00:11:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:16.971 00:11:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1192 -- # local i=0 00:10:16.971 00:11:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:10:16.971 00:11:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # [[ -n 2 ]] 00:10:16.971 00:11:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # nvme_device_counter=2 00:10:16.971 00:11:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # sleep 2 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_devices=2 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # return 0 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:19.506 00:11:37 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:19.506 [ 0]:0x1 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6201530a402b4f61bd5dd8c4e52fe461 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6201530a402b4f61bd5dd8c4e52fe461 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:19.506 [ 1]:0x2 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=965de2d163b74d00b6348f44a6590532 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 965de2d163b74d00b6348f44a6590532 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:19.506 00:11:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg ns_is_visible 0x1 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=ns_is_visible 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t ns_is_visible 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # ns_is_visible 0x1 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:19.506 [ 0]:0x2 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=965de2d163b74d00b6348f44a6590532 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 965de2d163b74d00b6348f44a6590532 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:10:19.506 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:19.507 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:19.507 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:19.766 [2024-07-16 00:11:38.394076] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:19.766 request: 00:10:19.766 { 
00:10:19.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:19.766 "nsid": 2, 00:10:19.766 "host": "nqn.2016-06.io.spdk:host1", 00:10:19.766 "method": "nvmf_ns_remove_host", 00:10:19.766 "req_id": 1 00:10:19.766 } 00:10:19.766 Got JSON-RPC error response 00:10:19.766 response: 00:10:19.766 { 00:10:19.766 "code": -32602, 00:10:19.766 "message": "Invalid parameters" 00:10:19.766 } 00:10:19.766 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:10:19.766 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:10:19.766 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:10:19.766 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:10:19.766 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:10:19.766 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:10:19.766 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg ns_is_visible 0x1 00:10:19.766 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=ns_is_visible 00:10:19.766 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t ns_is_visible 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # ns_is_visible 0x1 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:19.767 [ 0]:0x2 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=965de2d163b74d00b6348f44a6590532 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 965de2d163b74d00b6348f44a6590532 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:19.767 
00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:10:19.767 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:20.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.027 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1422938 00:10:20.027 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:10:20.027 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.027 00:11:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1422938 /var/tmp/host.sock 00:10:20.027 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@823 -- # '[' -z 1422938 ']' 00:10:20.027 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/host.sock 00:10:20.027 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@828 -- # local max_retries=100 00:10:20.027 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:20.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:20.027 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # xtrace_disable 00:10:20.027 00:11:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:20.027 [2024-07-16 00:11:38.745501] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:10:20.027 [2024-07-16 00:11:38.745547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422938 ] 00:10:20.027 [2024-07-16 00:11:38.800157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.027 [2024-07-16 00:11:38.873189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.964 00:11:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:10:20.964 00:11:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # return 0 00:10:20.964 00:11:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.964 00:11:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:21.222 00:11:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 0f265f38-f9c3-4d38-8d52-ff995d239273 00:10:21.222 00:11:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:21.222 00:11:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0F265F38F9C34D388D52FF995D239273 -i 00:10:21.222 00:11:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid e109cd50-b7f5-4059-83a2-0306c3127b79 00:10:21.222 00:11:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:21.223 00:11:40 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g E109CD50B7F5405983A20306C3127B79 -i 00:10:21.482 00:11:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:21.741 00:11:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:10:21.741 00:11:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:21.741 00:11:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:22.309 nvme0n1 00:10:22.309 00:11:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:22.309 00:11:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:22.568 nvme1n2 00:10:22.568 00:11:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:10:22.568 00:11:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:10:22.568 00:11:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:10:22.568 00:11:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:10:22.568 00:11:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:10:22.827 00:11:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:10:22.827 00:11:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:10:22.827 00:11:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:10:22.827 00:11:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:10:23.086 00:11:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 0f265f38-f9c3-4d38-8d52-ff995d239273 == \0\f\2\6\5\f\3\8\-\f\9\c\3\-\4\d\3\8\-\8\d\5\2\-\f\f\9\9\5\d\2\3\9\2\7\3 ]] 00:10:23.086 00:11:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:10:23.086 00:11:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:10:23.086 00:11:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:10:23.086 00:11:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ e109cd50-b7f5-4059-83a2-0306c3127b79 == 
\e\1\0\9\c\d\5\0\-\b\7\f\5\-\4\0\5\9\-\8\3\a\2\-\0\3\0\6\c\3\1\2\7\b\7\9 ]] 00:10:23.086 00:11:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1422938 00:10:23.086 00:11:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@942 -- # '[' -z 1422938 ']' 00:10:23.086 00:11:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # kill -0 1422938 00:10:23.086 00:11:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # uname 00:10:23.086 00:11:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:10:23.086 00:11:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1422938 00:10:23.086 00:11:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:10:23.086 00:11:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:10:23.086 00:11:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1422938' 00:10:23.086 killing process with pid 1422938 00:10:23.086 00:11:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@961 -- # kill 1422938 00:10:23.086 00:11:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # wait 1422938 00:10:23.653 00:11:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:23.653 00:11:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:10:23.653 00:11:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:10:23.653 00:11:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:23.653 00:11:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:10:23.653 00:11:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:23.653 00:11:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:10:23.653 00:11:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:23.653 00:11:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:23.653 rmmod nvme_tcp 00:10:23.653 rmmod nvme_fabrics 00:10:23.653 rmmod nvme_keyring 00:10:23.653 00:11:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:23.653 00:11:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:10:23.653 00:11:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:10:23.653 00:11:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1420928 ']' 00:10:23.653 00:11:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1420928 00:10:23.653 00:11:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@942 -- # '[' -z 1420928 ']' 00:10:23.653 00:11:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # kill -0 1420928 00:10:23.653 00:11:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # uname 00:10:23.653 00:11:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:10:23.653 00:11:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1420928 00:10:23.911 00:11:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:10:23.911 00:11:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:10:23.911 00:11:42 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@960 -- # echo 'killing process with pid 1420928' 00:10:23.911 killing process with pid 1420928 00:10:23.911 00:11:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@961 -- # kill 1420928 00:10:23.911 00:11:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # wait 1420928 00:10:23.911 00:11:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:23.911 00:11:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:23.911 00:11:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:23.911 00:11:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:23.911 00:11:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:23.911 00:11:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.911 00:11:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:23.911 00:11:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.443 00:11:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:26.443 00:10:26.443 real 0m22.365s 00:10:26.443 user 0m24.543s 00:10:26.443 sys 0m5.805s 00:10:26.443 00:11:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1118 -- # xtrace_disable 00:10:26.443 00:11:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:26.443 ************************************ 00:10:26.443 END TEST nvmf_ns_masking 00:10:26.443 ************************************ 00:10:26.443 00:11:44 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:10:26.443 00:11:44 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:10:26.443 00:11:44 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:26.443 00:11:44 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:10:26.443 00:11:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:10:26.443 00:11:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:26.443 ************************************ 00:10:26.443 START TEST nvmf_nvme_cli 00:10:26.443 ************************************ 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:26.443 * Looking for test storage... 
00:10:26.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:26.443 00:11:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.443 00:11:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:26.443 00:11:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:26.443 00:11:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:10:26.443 00:11:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:31.721 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:31.721 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:31.721 Found net devices under 0000:86:00.0: cvl_0_0 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:31.721 Found net devices under 0000:86:00.1: cvl_0_1 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:31.721 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:10:31.722 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:31.722 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:31.722 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:31.722 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.722 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:31.722 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:31.722 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:31.722 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:31.722 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:31.722 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:31.722 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:31.722 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.722 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:31.722 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:31.722 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:31.722 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:31.722 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:31.722 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:31.722 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:31.722 00:11:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:31.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:31.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:10:31.722 00:10:31.722 --- 10.0.0.2 ping statistics --- 00:10:31.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.722 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:31.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:31.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:10:31.722 00:10:31.722 --- 10.0.0.1 ping statistics --- 00:10:31.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.722 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1427106 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1427106 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@823 -- # '[' -z 1427106 ']' 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@828 -- # local max_retries=100 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # xtrace_disable 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:31.722 00:11:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:31.722 [2024-07-16 00:11:50.138269] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:10:31.722 [2024-07-16 00:11:50.138316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.722 [2024-07-16 00:11:50.196024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.722 [2024-07-16 00:11:50.284501] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:31.722 [2024-07-16 00:11:50.284534] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
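The SPDK initialization banner above comes from a target that nvmftestinit launched inside a private network namespace, which is why 10.0.0.2 only answers from the target side of the link. Condensed from the ip and iptables commands earlier in the trace (the cvl_0_0/cvl_0_1 interface names belong to the E810 ports of this particular test bed):

    ip netns add cvl_0_0_ns_spdk                 # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move one NIC port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    # every target-side process, including nvmf_tgt itself, then runs as:
    # ip netns exec cvl_0_0_ns_spdk <command>

The two pings that precede the target start are the sanity check that both directions of this plumbing work before any NVMe/TCP traffic is attempted.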
00:10:31.722 [2024-07-16 00:11:50.284543] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:31.722 [2024-07-16 00:11:50.284553] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:31.722 [2024-07-16 00:11:50.284560] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:31.722 [2024-07-16 00:11:50.284603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.722 [2024-07-16 00:11:50.284696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.722 [2024-07-16 00:11:50.284713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.722 [2024-07-16 00:11:50.284716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.291 00:11:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:10:32.291 00:11:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # return 0 00:10:32.291 00:11:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:32.291 00:11:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:32.291 00:11:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.291 00:11:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.291 00:11:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:32.291 00:11:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:32.291 00:11:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.291 [2024-07-16 00:11:50.993131] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.291 00:11:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:32.291 00:11:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:32.291 00:11:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:32.291 00:11:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.291 Malloc0 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.291 Malloc1 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:32.291 00:11:51 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.291 [2024-07-16 00:11:51.070934] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:32.291 00:11:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:32.551 00:10:32.551 Discovery Log Number of Records 2, Generation counter 2 00:10:32.551 =====Discovery Log Entry 0====== 00:10:32.551 trtype: tcp 00:10:32.551 adrfam: ipv4 00:10:32.551 subtype: current discovery subsystem 00:10:32.551 treq: not required 00:10:32.551 portid: 0 00:10:32.551 trsvcid: 4420 00:10:32.551 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:32.551 traddr: 10.0.0.2 00:10:32.551 eflags: explicit discovery connections, duplicate discovery information 00:10:32.551 sectype: none 00:10:32.551 =====Discovery Log Entry 1====== 00:10:32.551 trtype: tcp 00:10:32.551 adrfam: ipv4 00:10:32.551 subtype: nvme subsystem 00:10:32.551 treq: not required 00:10:32.551 portid: 0 00:10:32.551 trsvcid: 4420 00:10:32.551 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:32.551 traddr: 10.0.0.2 00:10:32.551 eflags: none 00:10:32.551 sectype: none 00:10:32.551 00:11:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:10:32.551 00:11:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:10:32.551 00:11:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:32.551 00:11:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:32.551 00:11:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:32.551 00:11:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:32.551 00:11:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:32.551 00:11:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:32.551 00:11:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:32.551 00:11:51 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:10:32.551 00:11:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:33.490 00:11:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:33.490 00:11:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1192 -- # local i=0 00:10:33.490 00:11:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:10:33.490 00:11:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # [[ -n 2 ]] 00:10:33.490 00:11:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # nvme_device_counter=2 00:10:33.491 00:11:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # sleep 2 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_devices=2 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # return 0 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:10:36.033 /dev/nvme0n1 ]] 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == 
/dev/nvme* ]] 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:36.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1213 -- # local i=0 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:10:36.033 00:11:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # return 0 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:36.293 rmmod nvme_tcp 00:10:36.293 rmmod nvme_fabrics 00:10:36.293 rmmod nvme_keyring 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 
-- # '[' -n 1427106 ']' 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1427106 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@942 -- # '[' -z 1427106 ']' 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # kill -0 1427106 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@947 -- # uname 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:10:36.293 00:11:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1427106 00:10:36.293 00:11:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:10:36.293 00:11:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:10:36.293 00:11:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1427106' 00:10:36.293 killing process with pid 1427106 00:10:36.293 00:11:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@961 -- # kill 1427106 00:10:36.293 00:11:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # wait 1427106 00:10:36.553 00:11:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:36.553 00:11:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:36.553 00:11:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:36.553 00:11:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:36.553 00:11:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:36.553 00:11:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.553 00:11:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:36.553 00:11:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.472 00:11:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:38.472 00:10:38.472 real 0m12.415s 00:10:38.472 user 0m21.061s 00:10:38.472 sys 0m4.483s 00:10:38.472 00:11:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1118 -- # xtrace_disable 00:10:38.472 00:11:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:38.472 ************************************ 00:10:38.472 END TEST nvmf_nvme_cli 00:10:38.472 ************************************ 00:10:38.732 00:11:57 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:10:38.732 00:11:57 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:10:38.732 00:11:57 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:38.732 00:11:57 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:10:38.732 00:11:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:10:38.732 00:11:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:38.732 ************************************ 00:10:38.732 START TEST nvmf_vfio_user 00:10:38.732 ************************************ 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:38.732 * Looking for test storage... 
00:10:38.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:10:38.732 00:11:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
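The setup_nvmf_vfio_user helper traced below reduces to launching nvmf_tgt and issuing a short rpc.py sequence per emulated device. A minimal sketch of the equivalent manual steps, using only commands that appear later in this log ($SPDK abbreviates the workspace spdk checkout; the script additionally waits for the RPC socket before issuing calls):

  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER
  # per-device socket directory, bdev, subsystem, namespace, vfio-user listener
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The same four rpc.py calls repeat with Malloc2, nqn.2019-07.io.spdk:cnode2, serial SPDK2, and the vfio-user2/2 socket directory for the second device (NUM_DEVICES=2).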
00:10:38.733 00:11:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:10:38.733 00:11:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:10:38.733 00:11:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1428455 00:10:38.733 00:11:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1428455' 00:10:38.733 Process pid: 1428455 00:10:38.733 00:11:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:38.733 00:11:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1428455 00:10:38.733 00:11:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@823 -- # '[' -z 1428455 ']' 00:10:38.733 00:11:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.733 00:11:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@828 -- # local max_retries=100 00:10:38.733 00:11:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.733 00:11:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # xtrace_disable 00:10:38.733 00:11:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:38.733 00:11:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:10:38.733 [2024-07-16 00:11:57.521726] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:10:38.733 [2024-07-16 00:11:57.521773] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.733 [2024-07-16 00:11:57.576077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.992 [2024-07-16 00:11:57.657113] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.992 [2024-07-16 00:11:57.657147] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.992 [2024-07-16 00:11:57.657157] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.992 [2024-07-16 00:11:57.657164] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.992 [2024-07-16 00:11:57.657171] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
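Because the target was started with tracepoint mask 0xFFFF, a live trace file exists at /dev/shm/nvmf_trace.0 while it runs, per the app_setup_trace notices above. A sketch of capturing it, assuming the spdk_trace tool was built alongside the target under build/bin in this workspace layout:

  # snapshot events from the running app (-s nvmf and -i 0 come from the notice above)
  $SPDK/build/bin/spdk_trace -s nvmf -i 0
  # or keep the raw shared-memory file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0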
00:10:38.992 [2024-07-16 00:11:57.657217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.992 [2024-07-16 00:11:57.657237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.992 [2024-07-16 00:11:57.657323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.992 [2024-07-16 00:11:57.657326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.560 00:11:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:10:39.560 00:11:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # return 0 00:10:39.560 00:11:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:40.497 00:11:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:10:40.756 00:11:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:40.756 00:11:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:40.756 00:11:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:40.756 00:11:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:40.756 00:11:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:41.015 Malloc1 00:10:41.015 00:11:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:41.274 00:11:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:41.532 00:12:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:41.532 00:12:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:41.532 00:12:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:41.532 00:12:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:41.791 Malloc2 00:10:41.791 00:12:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:42.050 00:12:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:42.309 00:12:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:42.309 00:12:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:10:42.309 00:12:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:10:42.309 00:12:01 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:42.309 00:12:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:42.309 00:12:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:10:42.309 00:12:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:42.309 [2024-07-16 00:12:01.139510] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:10:42.309 [2024-07-16 00:12:01.139543] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1429129 ] 00:10:42.570 [2024-07-16 00:12:01.169745] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:10:42.570 [2024-07-16 00:12:01.172640] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:42.570 [2024-07-16 00:12:01.172658] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb8d8907000 00:10:42.570 [2024-07-16 00:12:01.173642] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:42.570 [2024-07-16 00:12:01.174644] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:42.570 [2024-07-16 00:12:01.175647] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:42.570 [2024-07-16 00:12:01.176652] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:42.570 [2024-07-16 00:12:01.177662] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:42.570 [2024-07-16 00:12:01.178668] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:42.570 [2024-07-16 00:12:01.179674] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:42.570 [2024-07-16 00:12:01.180674] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:42.570 [2024-07-16 00:12:01.181681] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:42.570 [2024-07-16 00:12:01.181690] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb8d88fc000 00:10:42.570 [2024-07-16 00:12:01.182633] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:42.570 [2024-07-16 00:12:01.191260] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path 
/var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:10:42.570 [2024-07-16 00:12:01.191284] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:10:42.570 [2024-07-16 00:12:01.196778] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:42.570 [2024-07-16 00:12:01.196817] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:42.570 [2024-07-16 00:12:01.196886] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:10:42.570 [2024-07-16 00:12:01.196903] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:10:42.570 [2024-07-16 00:12:01.196908] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:10:42.570 [2024-07-16 00:12:01.197778] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:10:42.570 [2024-07-16 00:12:01.197786] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:10:42.570 [2024-07-16 00:12:01.197793] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:10:42.570 [2024-07-16 00:12:01.198778] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:42.570 [2024-07-16 00:12:01.198786] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:10:42.570 [2024-07-16 00:12:01.198793] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:10:42.570 [2024-07-16 00:12:01.199783] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:10:42.570 [2024-07-16 00:12:01.199792] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:42.570 [2024-07-16 00:12:01.200789] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:10:42.570 [2024-07-16 00:12:01.200797] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:10:42.570 [2024-07-16 00:12:01.200802] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:10:42.570 [2024-07-16 00:12:01.200808] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:42.570 [2024-07-16 00:12:01.200913] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:10:42.570 [2024-07-16 00:12:01.200917] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:42.570 [2024-07-16 00:12:01.200922] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:10:42.570 [2024-07-16 00:12:01.201795] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:10:42.570 [2024-07-16 00:12:01.202805] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:10:42.570 [2024-07-16 00:12:01.203807] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:42.570 [2024-07-16 00:12:01.204803] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:42.570 [2024-07-16 00:12:01.204867] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:42.570 [2024-07-16 00:12:01.205816] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:10:42.570 [2024-07-16 00:12:01.205830] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:42.570 [2024-07-16 00:12:01.205836] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:10:42.570 [2024-07-16 00:12:01.205854] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:10:42.570 [2024-07-16 00:12:01.205862] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:10:42.570 [2024-07-16 00:12:01.205877] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:42.570 [2024-07-16 00:12:01.205881] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:42.570 [2024-07-16 00:12:01.205895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:42.570 [2024-07-16 00:12:01.205931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:42.570 [2024-07-16 00:12:01.205940] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:10:42.570 [2024-07-16 00:12:01.205948] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:10:42.570 [2024-07-16 00:12:01.205952] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:10:42.570 [2024-07-16 00:12:01.205957] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:42.570 [2024-07-16 00:12:01.205961] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 
00:10:42.570 [2024-07-16 00:12:01.205967] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:10:42.570 [2024-07-16 00:12:01.205971] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:10:42.570 [2024-07-16 00:12:01.205978] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:10:42.570 [2024-07-16 00:12:01.205986] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:42.570 [2024-07-16 00:12:01.205996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:42.570 [2024-07-16 00:12:01.206008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.570 [2024-07-16 00:12:01.206015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.571 [2024-07-16 00:12:01.206023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.571 [2024-07-16 00:12:01.206030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.571 [2024-07-16 00:12:01.206034] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:10:42.571 [2024-07-16 00:12:01.206041] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:42.571 [2024-07-16 00:12:01.206049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:42.571 [2024-07-16 00:12:01.206059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:42.571 [2024-07-16 00:12:01.206064] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:10:42.571 [2024-07-16 00:12:01.206069] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:42.571 [2024-07-16 00:12:01.206074] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:10:42.571 [2024-07-16 00:12:01.206080] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:10:42.571 [2024-07-16 00:12:01.206088] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:42.571 [2024-07-16 00:12:01.206102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:42.571 [2024-07-16 00:12:01.206151] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:10:42.571 [2024-07-16 00:12:01.206158] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:10:42.571 [2024-07-16 00:12:01.206165] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:42.571 [2024-07-16 00:12:01.206169] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:42.571 [2024-07-16 00:12:01.206174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:42.571 [2024-07-16 00:12:01.206185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:42.571 [2024-07-16 00:12:01.206194] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:10:42.571 [2024-07-16 00:12:01.206201] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:10:42.571 [2024-07-16 00:12:01.206208] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:10:42.571 [2024-07-16 00:12:01.206214] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:42.571 [2024-07-16 00:12:01.206218] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:42.571 [2024-07-16 00:12:01.206230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:42.571 [2024-07-16 00:12:01.206246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:42.571 [2024-07-16 00:12:01.206258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:42.571 [2024-07-16 00:12:01.206265] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:42.571 [2024-07-16 00:12:01.206271] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:42.571 [2024-07-16 00:12:01.206275] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:42.571 [2024-07-16 00:12:01.206280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:42.571 [2024-07-16 00:12:01.206291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:42.571 [2024-07-16 00:12:01.206298] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:42.571 [2024-07-16 00:12:01.206304] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:10:42.571 [2024-07-16 00:12:01.206310] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:10:42.571 [2024-07-16 00:12:01.206317] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:10:42.571 [2024-07-16 00:12:01.206321] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:42.571 [2024-07-16 00:12:01.206326] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:10:42.571 [2024-07-16 00:12:01.206330] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:10:42.571 [2024-07-16 00:12:01.206334] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:10:42.571 [2024-07-16 00:12:01.206338] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:10:42.571 [2024-07-16 00:12:01.206354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:42.571 [2024-07-16 00:12:01.206364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:42.571 [2024-07-16 00:12:01.206374] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:42.571 [2024-07-16 00:12:01.206382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:42.571 [2024-07-16 00:12:01.206392] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:42.571 [2024-07-16 00:12:01.206400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:42.571 [2024-07-16 00:12:01.206410] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:42.571 [2024-07-16 00:12:01.206422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:42.571 [2024-07-16 00:12:01.206434] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:42.571 [2024-07-16 00:12:01.206438] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:42.571 [2024-07-16 00:12:01.206441] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:42.571 [2024-07-16 00:12:01.206444] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:42.571 [2024-07-16 00:12:01.206449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:42.571 [2024-07-16 00:12:01.206456] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:42.571 [2024-07-16 00:12:01.206460] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:42.571 [2024-07-16 00:12:01.206465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:42.571 [2024-07-16 00:12:01.206472] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:42.571 [2024-07-16 00:12:01.206475] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:42.571 [2024-07-16 00:12:01.206480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:42.571 [2024-07-16 00:12:01.206487] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:42.571 [2024-07-16 00:12:01.206491] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:42.571 [2024-07-16 00:12:01.206496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:42.571 [2024-07-16 00:12:01.206502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:42.571 [2024-07-16 00:12:01.206513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:42.571 [2024-07-16 00:12:01.206523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:42.571 [2024-07-16 00:12:01.206529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:42.571 ===================================================== 00:10:42.571 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:42.571 ===================================================== 00:10:42.571 Controller Capabilities/Features 00:10:42.571 ================================ 00:10:42.571 Vendor ID: 4e58 00:10:42.571 Subsystem Vendor ID: 4e58 00:10:42.571 Serial Number: SPDK1 00:10:42.571 Model Number: SPDK bdev Controller 00:10:42.571 Firmware Version: 24.09 00:10:42.571 Recommended Arb Burst: 6 00:10:42.571 IEEE OUI Identifier: 8d 6b 50 00:10:42.571 Multi-path I/O 00:10:42.571 May have multiple subsystem ports: Yes 00:10:42.571 May have multiple controllers: Yes 00:10:42.571 Associated with SR-IOV VF: No 00:10:42.571 Max Data Transfer Size: 131072 00:10:42.571 Max Number of Namespaces: 32 00:10:42.571 Max Number of I/O Queues: 127 00:10:42.571 NVMe Specification Version (VS): 1.3 00:10:42.571 NVMe Specification Version (Identify): 1.3 00:10:42.571 Maximum Queue Entries: 256 00:10:42.571 Contiguous Queues Required: Yes 00:10:42.571 Arbitration Mechanisms Supported 00:10:42.571 Weighted Round Robin: Not Supported 00:10:42.571 Vendor Specific: Not Supported 00:10:42.571 Reset Timeout: 15000 ms 00:10:42.571 Doorbell Stride: 4 bytes 00:10:42.571 NVM Subsystem Reset: Not Supported 00:10:42.571 Command Sets Supported 00:10:42.571 NVM Command Set: Supported 00:10:42.571 Boot Partition: Not Supported 00:10:42.571 Memory Page Size Minimum: 4096 bytes 00:10:42.571 Memory Page Size Maximum: 4096 bytes 00:10:42.571 Persistent Memory Region: Not Supported 00:10:42.571 Optional Asynchronous 
Events Supported 00:10:42.571 Namespace Attribute Notices: Supported 00:10:42.571 Firmware Activation Notices: Not Supported 00:10:42.571 ANA Change Notices: Not Supported 00:10:42.571 PLE Aggregate Log Change Notices: Not Supported 00:10:42.571 LBA Status Info Alert Notices: Not Supported 00:10:42.571 EGE Aggregate Log Change Notices: Not Supported 00:10:42.571 Normal NVM Subsystem Shutdown event: Not Supported 00:10:42.571 Zone Descriptor Change Notices: Not Supported 00:10:42.571 Discovery Log Change Notices: Not Supported 00:10:42.571 Controller Attributes 00:10:42.571 128-bit Host Identifier: Supported 00:10:42.572 Non-Operational Permissive Mode: Not Supported 00:10:42.572 NVM Sets: Not Supported 00:10:42.572 Read Recovery Levels: Not Supported 00:10:42.572 Endurance Groups: Not Supported 00:10:42.572 Predictable Latency Mode: Not Supported 00:10:42.572 Traffic Based Keep ALive: Not Supported 00:10:42.572 Namespace Granularity: Not Supported 00:10:42.572 SQ Associations: Not Supported 00:10:42.572 UUID List: Not Supported 00:10:42.572 Multi-Domain Subsystem: Not Supported 00:10:42.572 Fixed Capacity Management: Not Supported 00:10:42.572 Variable Capacity Management: Not Supported 00:10:42.572 Delete Endurance Group: Not Supported 00:10:42.572 Delete NVM Set: Not Supported 00:10:42.572 Extended LBA Formats Supported: Not Supported 00:10:42.572 Flexible Data Placement Supported: Not Supported 00:10:42.572 00:10:42.572 Controller Memory Buffer Support 00:10:42.572 ================================ 00:10:42.572 Supported: No 00:10:42.572 00:10:42.572 Persistent Memory Region Support 00:10:42.572 ================================ 00:10:42.572 Supported: No 00:10:42.572 00:10:42.572 Admin Command Set Attributes 00:10:42.572 ============================ 00:10:42.572 Security Send/Receive: Not Supported 00:10:42.572 Format NVM: Not Supported 00:10:42.572 Firmware Activate/Download: Not Supported 00:10:42.572 Namespace Management: Not Supported 00:10:42.572 Device Self-Test: Not Supported 00:10:42.572 Directives: Not Supported 00:10:42.572 NVMe-MI: Not Supported 00:10:42.572 Virtualization Management: Not Supported 00:10:42.572 Doorbell Buffer Config: Not Supported 00:10:42.572 Get LBA Status Capability: Not Supported 00:10:42.572 Command & Feature Lockdown Capability: Not Supported 00:10:42.572 Abort Command Limit: 4 00:10:42.572 Async Event Request Limit: 4 00:10:42.572 Number of Firmware Slots: N/A 00:10:42.572 Firmware Slot 1 Read-Only: N/A 00:10:42.572 Firmware Activation Without Reset: N/A 00:10:42.572 Multiple Update Detection Support: N/A 00:10:42.572 Firmware Update Granularity: No Information Provided 00:10:42.572 Per-Namespace SMART Log: No 00:10:42.572 Asymmetric Namespace Access Log Page: Not Supported 00:10:42.572 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:10:42.572 Command Effects Log Page: Supported 00:10:42.572 Get Log Page Extended Data: Supported 00:10:42.572 Telemetry Log Pages: Not Supported 00:10:42.572 Persistent Event Log Pages: Not Supported 00:10:42.572 Supported Log Pages Log Page: May Support 00:10:42.572 Commands Supported & Effects Log Page: Not Supported 00:10:42.572 Feature Identifiers & Effects Log Page:May Support 00:10:42.572 NVMe-MI Commands & Effects Log Page: May Support 00:10:42.572 Data Area 4 for Telemetry Log: Not Supported 00:10:42.572 Error Log Page Entries Supported: 128 00:10:42.572 Keep Alive: Supported 00:10:42.572 Keep Alive Granularity: 10000 ms 00:10:42.572 00:10:42.572 NVM Command Set Attributes 00:10:42.572 ========================== 
00:10:42.572 Submission Queue Entry Size 00:10:42.572 Max: 64 00:10:42.572 Min: 64 00:10:42.572 Completion Queue Entry Size 00:10:42.572 Max: 16 00:10:42.572 Min: 16 00:10:42.572 Number of Namespaces: 32 00:10:42.572 Compare Command: Supported 00:10:42.572 Write Uncorrectable Command: Not Supported 00:10:42.572 Dataset Management Command: Supported 00:10:42.572 Write Zeroes Command: Supported 00:10:42.572 Set Features Save Field: Not Supported 00:10:42.572 Reservations: Not Supported 00:10:42.572 Timestamp: Not Supported 00:10:42.572 Copy: Supported 00:10:42.572 Volatile Write Cache: Present 00:10:42.572 Atomic Write Unit (Normal): 1 00:10:42.572 Atomic Write Unit (PFail): 1 00:10:42.572 Atomic Compare & Write Unit: 1 00:10:42.572 Fused Compare & Write: Supported 00:10:42.572 Scatter-Gather List 00:10:42.572 SGL Command Set: Supported (Dword aligned) 00:10:42.572 SGL Keyed: Not Supported 00:10:42.572 SGL Bit Bucket Descriptor: Not Supported 00:10:42.572 SGL Metadata Pointer: Not Supported 00:10:42.572 Oversized SGL: Not Supported 00:10:42.572 SGL Metadata Address: Not Supported 00:10:42.572 SGL Offset: Not Supported 00:10:42.572 Transport SGL Data Block: Not Supported 00:10:42.572 Replay Protected Memory Block: Not Supported 00:10:42.572 00:10:42.572 Firmware Slot Information 00:10:42.572 ========================= 00:10:42.572 Active slot: 1 00:10:42.572 Slot 1 Firmware Revision: 24.09 00:10:42.572 00:10:42.572 00:10:42.572 Commands Supported and Effects 00:10:42.572 ============================== 00:10:42.572 Admin Commands 00:10:42.572 -------------- 00:10:42.572 Get Log Page (02h): Supported 00:10:42.572 Identify (06h): Supported 00:10:42.572 Abort (08h): Supported 00:10:42.572 Set Features (09h): Supported 00:10:42.572 Get Features (0Ah): Supported 00:10:42.572 Asynchronous Event Request (0Ch): Supported 00:10:42.572 Keep Alive (18h): Supported 00:10:42.572 I/O Commands 00:10:42.572 ------------ 00:10:42.572 Flush (00h): Supported LBA-Change 00:10:42.572 Write (01h): Supported LBA-Change 00:10:42.572 Read (02h): Supported 00:10:42.572 Compare (05h): Supported 00:10:42.572 Write Zeroes (08h): Supported LBA-Change 00:10:42.572 Dataset Management (09h): Supported LBA-Change 00:10:42.572 Copy (19h): Supported LBA-Change 00:10:42.572 00:10:42.572 Error Log 00:10:42.572 ========= 00:10:42.572 00:10:42.572 Arbitration 00:10:42.572 =========== 00:10:42.572 Arbitration Burst: 1 00:10:42.572 00:10:42.572 Power Management 00:10:42.572 ================ 00:10:42.572 Number of Power States: 1 00:10:42.572 Current Power State: Power State #0 00:10:42.572 Power State #0: 00:10:42.572 Max Power: 0.00 W 00:10:42.572 Non-Operational State: Operational 00:10:42.572 Entry Latency: Not Reported 00:10:42.572 Exit Latency: Not Reported 00:10:42.572 Relative Read Throughput: 0 00:10:42.572 Relative Read Latency: 0 00:10:42.572 Relative Write Throughput: 0 00:10:42.572 Relative Write Latency: 0 00:10:42.572 Idle Power: Not Reported 00:10:42.572 Active Power: Not Reported 00:10:42.572 Non-Operational Permissive Mode: Not Supported 00:10:42.572 00:10:42.572 Health Information 00:10:42.572 ================== 00:10:42.572 Critical Warnings: 00:10:42.572 Available Spare Space: OK 00:10:42.572 Temperature: OK 00:10:42.572 Device Reliability: OK 00:10:42.572 Read Only: No 00:10:42.572 Volatile Memory Backup: OK 00:10:42.572 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:42.572 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:42.572 Available Spare: 0% 00:10:42.572 [2024-07-16 00:12:01.206621] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:42.572 [2024-07-16 00:12:01.206629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:42.572 [2024-07-16 00:12:01.206655] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:10:42.572 [2024-07-16 00:12:01.206663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.572 [2024-07-16 00:12:01.206671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.572 [2024-07-16 00:12:01.206676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.572 [2024-07-16 00:12:01.206682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.572 [2024-07-16 00:12:01.210233] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:42.572 [2024-07-16 00:12:01.210244] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:10:42.572 [2024-07-16 00:12:01.210844] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:42.572 [2024-07-16 00:12:01.210891] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:10:42.572 [2024-07-16 00:12:01.210899] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:10:42.572 [2024-07-16 00:12:01.211858] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:10:42.572 [2024-07-16 00:12:01.211870] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:10:42.572 [2024-07-16 00:12:01.211918] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:10:42.572 [2024-07-16 00:12:01.213888] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:42.572 Available Spare Threshold: 0% 00:10:42.572 Life Percentage Used: 0% 00:10:42.572 Data Units Read: 0 00:10:42.572 Data Units Written: 0 00:10:42.572 Host Read Commands: 0 00:10:42.572 Host Write Commands: 0 00:10:42.572 Controller Busy Time: 0 minutes 00:10:42.572 Power Cycles: 0 00:10:42.572 Power On Hours: 0 hours 00:10:42.572 Unsafe Shutdowns: 0 00:10:42.572 Unrecoverable Media Errors: 0 00:10:42.572 Lifetime Error Log Entries: 0 00:10:42.572 Warning Temperature Time: 0 minutes 00:10:42.572 Critical Temperature Time: 0 minutes 00:10:42.572 00:10:42.572 Number of Queues 00:10:42.572 ================ 00:10:42.572 Number of I/O Submission Queues: 127 00:10:42.572 Number of I/O Completion Queues: 127 00:10:42.572 00:10:42.572 Active Namespaces 00:10:42.572 ================= 00:10:42.572 Namespace ID:1 00:10:42.572 Error Recovery Timeout: Unlimited 00:10:42.572 Command Set Identifier: NVM (00h) 00:10:42.573 
Deallocate: Supported 00:10:42.573 Deallocated/Unwritten Error: Not Supported 00:10:42.573 Deallocated Read Value: Unknown 00:10:42.573 Deallocate in Write Zeroes: Not Supported 00:10:42.573 Deallocated Guard Field: 0xFFFF 00:10:42.573 Flush: Supported 00:10:42.573 Reservation: Supported 00:10:42.573 Namespace Sharing Capabilities: Multiple Controllers 00:10:42.573 Size (in LBAs): 131072 (0GiB) 00:10:42.573 Capacity (in LBAs): 131072 (0GiB) 00:10:42.573 Utilization (in LBAs): 131072 (0GiB) 00:10:42.573 NGUID: E30B269D37FC40918311B546A806FABC 00:10:42.573 UUID: e30b269d-37fc-4091-8311-b546a806fabc 00:10:42.573 Thin Provisioning: Not Supported 00:10:42.573 Per-NS Atomic Units: Yes 00:10:42.573 Atomic Boundary Size (Normal): 0 00:10:42.573 Atomic Boundary Size (PFail): 0 00:10:42.573 Atomic Boundary Offset: 0 00:10:42.573 Maximum Single Source Range Length: 65535 00:10:42.573 Maximum Copy Length: 65535 00:10:42.573 Maximum Source Range Count: 1 00:10:42.573 NGUID/EUI64 Never Reused: No 00:10:42.573 Namespace Write Protected: No 00:10:42.573 Number of LBA Formats: 1 00:10:42.573 Current LBA Format: LBA Format #00 00:10:42.573 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:42.573 00:10:42.573 00:12:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:42.573 [2024-07-16 00:12:01.417968] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:47.840 Initializing NVMe Controllers 00:10:47.840 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:47.840 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:47.840 Initialization complete. Launching workers. 00:10:47.840 ======================================================== 00:10:47.840 Latency(us) 00:10:47.840 Device Information : IOPS MiB/s Average min max 00:10:47.840 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39918.45 155.93 3206.13 972.08 8591.79 00:10:47.840 ======================================================== 00:10:47.840 Total : 39918.45 155.93 3206.13 972.08 8591.79 00:10:47.840 00:10:47.840 [2024-07-16 00:12:06.436934] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:47.840 00:12:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:47.840 [2024-07-16 00:12:06.658969] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:53.109 Initializing NVMe Controllers 00:10:53.109 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:53.109 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:53.109 Initialization complete. Launching workers. 
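In the read throughput table above, the MiB/s column is just the IOPS column times the 4 KiB I/O size; the write run that is launching next prints the same table. A one-line sanity check of the read row (assuming bc is available on the box):

    # 39918.45 IOPS * 4096 B per I/O, in MiB/s (matches the 155.93 above)
    echo '39918.45 * 4096 / 1048576' | bc -l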
00:10:53.109 ======================================================== 00:10:53.109 Latency(us) 00:10:53.109 Device Information : IOPS MiB/s Average min max 00:10:53.109 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16039.28 62.65 7979.71 7586.70 8055.42 00:10:53.109 ======================================================== 00:10:53.109 Total : 16039.28 62.65 7979.71 7586.70 8055.42 00:10:53.109 00:10:53.109 [2024-07-16 00:12:11.693060] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:53.109 00:12:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:53.109 [2024-07-16 00:12:11.891030] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:58.381 [2024-07-16 00:12:16.950475] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:58.381 Initializing NVMe Controllers 00:10:58.381 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:58.381 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:58.381 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:10:58.381 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:10:58.381 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:10:58.381 Initialization complete. Launching workers. 00:10:58.381 Starting thread on core 2 00:10:58.381 Starting thread on core 3 00:10:58.381 Starting thread on core 1 00:10:58.381 00:12:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:10:58.381 [2024-07-16 00:12:17.232655] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:01.721 [2024-07-16 00:12:20.293788] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:01.721 Initializing NVMe Controllers 00:11:01.721 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:01.721 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:01.721 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:01.721 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:01.721 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:01.721 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:01.721 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:01.721 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:01.721 Initialization complete. Launching workers. 
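Both throughput numbers so far come from the same spdk_nvme_perf binary with only -w changed (read, then write): queue depth 128, 4 KiB I/Os, five seconds, one core. A minimal sketch of repeating the pair by hand, assuming this job's workspace layout and that the target is still serving the vfio-user1 socket:

    # Sketch only: re-run the 4 KiB read/write comparison against cnode1.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
    for wl in read write; do
      "$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w "$wl" -t 5 -c 0x2
    done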
00:11:01.721 Starting thread on core 1 with urgent priority queue 00:11:01.721 Starting thread on core 2 with urgent priority queue 00:11:01.721 Starting thread on core 3 with urgent priority queue 00:11:01.721 Starting thread on core 0 with urgent priority queue 00:11:01.721 SPDK bdev Controller (SPDK1 ) core 0: 8445.67 IO/s 11.84 secs/100000 ios 00:11:01.721 SPDK bdev Controller (SPDK1 ) core 1: 8512.33 IO/s 11.75 secs/100000 ios 00:11:01.721 SPDK bdev Controller (SPDK1 ) core 2: 7695.67 IO/s 12.99 secs/100000 ios 00:11:01.721 SPDK bdev Controller (SPDK1 ) core 3: 11033.00 IO/s 9.06 secs/100000 ios 00:11:01.721 ======================================================== 00:11:01.721 00:11:01.721 00:12:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:01.980 [2024-07-16 00:12:20.575714] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:01.980 Initializing NVMe Controllers 00:11:01.980 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:01.980 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:01.980 Namespace ID: 1 size: 0GB 00:11:01.980 Initialization complete. 00:11:01.980 INFO: using host memory buffer for IO 00:11:01.980 Hello world! 00:11:01.980 [2024-07-16 00:12:20.609933] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:01.980 00:12:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:02.239 [2024-07-16 00:12:20.872906] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:03.178 Initializing NVMe Controllers 00:11:03.178 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:03.178 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:03.178 Initialization complete. Launching workers. 
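The per-core lines above from the arbitration example report throughput both as IO/s and as seconds per 100000 I/Os; the overhead example launching below prints the submit/complete latency histograms that follow. A sketch, with overhead.log as a hypothetical saved copy of this output, of pulling the headline latencies back out:

    # Sketch: print the average submit and complete latency in ns
    # from a saved copy of this log (path is hypothetical).
    grep 'in ns) avg, min, max' overhead.log \
      | awk -F'= ' '{ split($2, v, ", "); print v[1] }'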
00:11:03.178 submit (in ns) avg, min, max = 6916.1, 3226.1, 4001112.2 00:11:03.178 complete (in ns) avg, min, max = 19688.9, 1799.1, 4003558.3 00:11:03.178 00:11:03.178 Submit histogram 00:11:03.178 ================ 00:11:03.178 Range in us Cumulative Count 00:11:03.178 3.214 - 3.228: 0.0122% ( 2) 00:11:03.178 3.228 - 3.242: 0.0306% ( 3) 00:11:03.178 3.242 - 3.256: 0.0367% ( 1) 00:11:03.178 3.256 - 3.270: 0.0551% ( 3) 00:11:03.178 3.270 - 3.283: 0.0918% ( 6) 00:11:03.178 3.283 - 3.297: 0.4284% ( 55) 00:11:03.178 3.297 - 3.311: 1.5914% ( 190) 00:11:03.178 3.311 - 3.325: 2.8828% ( 211) 00:11:03.178 3.325 - 3.339: 4.5966% ( 280) 00:11:03.178 3.339 - 3.353: 7.2775% ( 438) 00:11:03.178 3.353 - 3.367: 11.9170% ( 758) 00:11:03.178 3.367 - 3.381: 17.1012% ( 847) 00:11:03.178 3.381 - 3.395: 23.2342% ( 1002) 00:11:03.178 3.395 - 3.409: 29.3855% ( 1005) 00:11:03.178 3.409 - 3.423: 34.9492% ( 909) 00:11:03.178 3.423 - 3.437: 39.9437% ( 816) 00:11:03.178 3.437 - 3.450: 45.6543% ( 933) 00:11:03.178 3.450 - 3.464: 50.4468% ( 783) 00:11:03.178 3.464 - 3.478: 54.4069% ( 647) 00:11:03.178 3.478 - 3.492: 58.9607% ( 744) 00:11:03.178 3.492 - 3.506: 64.6162% ( 924) 00:11:03.178 3.506 - 3.520: 70.1187% ( 899) 00:11:03.178 3.520 - 3.534: 73.5953% ( 568) 00:11:03.178 3.534 - 3.548: 77.6350% ( 660) 00:11:03.178 3.548 - 3.562: 81.8215% ( 684) 00:11:03.178 3.562 - 3.590: 86.0142% ( 685) 00:11:03.178 3.590 - 3.617: 87.8565% ( 301) 00:11:03.178 3.617 - 3.645: 88.8664% ( 165) 00:11:03.178 3.645 - 3.673: 90.2926% ( 233) 00:11:03.178 3.673 - 3.701: 91.9635% ( 273) 00:11:03.178 3.701 - 3.729: 93.5182% ( 254) 00:11:03.178 3.729 - 3.757: 95.1218% ( 262) 00:11:03.178 3.757 - 3.784: 96.6765% ( 254) 00:11:03.178 3.784 - 3.812: 97.8027% ( 184) 00:11:03.178 3.812 - 3.840: 98.5372% ( 120) 00:11:03.178 3.840 - 3.868: 99.0697% ( 87) 00:11:03.178 3.868 - 3.896: 99.3145% ( 40) 00:11:03.178 3.896 - 3.923: 99.4920% ( 29) 00:11:03.178 3.923 - 3.951: 99.5654% ( 12) 00:11:03.178 3.951 - 3.979: 99.5838% ( 3) 00:11:03.178 4.981 - 5.009: 99.5899% ( 1) 00:11:03.178 5.064 - 5.092: 99.5960% ( 1) 00:11:03.178 5.092 - 5.120: 99.6022% ( 1) 00:11:03.178 5.148 - 5.176: 99.6083% ( 1) 00:11:03.178 5.231 - 5.259: 99.6144% ( 1) 00:11:03.178 5.398 - 5.426: 99.6205% ( 1) 00:11:03.178 5.426 - 5.454: 99.6328% ( 2) 00:11:03.178 5.482 - 5.510: 99.6389% ( 1) 00:11:03.178 5.510 - 5.537: 99.6450% ( 1) 00:11:03.178 5.537 - 5.565: 99.6511% ( 1) 00:11:03.178 5.621 - 5.649: 99.6634% ( 2) 00:11:03.178 5.649 - 5.677: 99.6695% ( 1) 00:11:03.178 5.677 - 5.704: 99.6756% ( 1) 00:11:03.178 5.704 - 5.732: 99.6817% ( 1) 00:11:03.178 5.760 - 5.788: 99.6940% ( 2) 00:11:03.178 5.788 - 5.816: 99.7001% ( 1) 00:11:03.178 5.871 - 5.899: 99.7123% ( 2) 00:11:03.178 6.038 - 6.066: 99.7246% ( 2) 00:11:03.178 6.122 - 6.150: 99.7307% ( 1) 00:11:03.178 6.150 - 6.177: 99.7368% ( 1) 00:11:03.178 6.177 - 6.205: 99.7429% ( 1) 00:11:03.178 6.233 - 6.261: 99.7674% ( 4) 00:11:03.178 6.261 - 6.289: 99.7735% ( 1) 00:11:03.178 6.317 - 6.344: 99.7797% ( 1) 00:11:03.178 6.344 - 6.372: 99.8041% ( 4) 00:11:03.178 6.428 - 6.456: 99.8103% ( 1) 00:11:03.178 6.483 - 6.511: 99.8347% ( 4) 00:11:03.178 6.511 - 6.539: 99.8409% ( 1) 00:11:03.178 6.734 - 6.762: 99.8470% ( 1) 00:11:03.178 6.845 - 6.873: 99.8531% ( 1) 00:11:03.178 6.873 - 6.901: 99.8592% ( 1) 00:11:03.178 7.012 - 7.040: 99.8653% ( 1) 00:11:03.178 7.068 - 7.096: 99.8715% ( 1) 00:11:03.178 7.346 - 7.402: 99.8776% ( 1) 00:11:03.178 7.457 - 7.513: 99.8837% ( 1) 00:11:03.178 7.513 - 7.569: 99.8898% ( 1) 00:11:03.178 7.624 - 7.680: 99.8959% ( 1) 
00:11:03.178 7.736 - 7.791: 99.9021% ( 1) 00:11:03.178 9.016 - 9.071: 99.9082% ( 1) 00:11:03.178 [2024-07-16 00:12:21.892784] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:03.178 10.351 - 10.407: 99.9143% ( 1) 00:11:03.178 3989.148 - 4017.642: 100.0000% ( 14) 00:11:03.178 00:11:03.178 Complete histogram 00:11:03.178 ================== 00:11:03.178 Range in us Cumulative Count 00:11:03.178 1.795 - 1.809: 0.0061% ( 1) 00:11:03.178 1.809 - 1.823: 0.1224% ( 19) 00:11:03.178 1.823 - 1.837: 1.1507% ( 168) 00:11:03.178 1.837 - 1.850: 2.3871% ( 202) 00:11:03.178 1.850 - 1.864: 3.0910% ( 115) 00:11:03.178 1.864 - 1.878: 19.3781% ( 2661) 00:11:03.178 1.878 - 1.892: 74.3849% ( 8987) 00:11:03.178 1.892 - 1.906: 91.9880% ( 2876) 00:11:03.178 1.906 - 1.920: 95.0055% ( 493) 00:11:03.178 1.920 - 1.934: 96.1501% ( 187) 00:11:03.178 1.934 - 1.948: 96.7989% ( 106) 00:11:03.178 1.948 - 1.962: 98.0047% ( 197) 00:11:03.178 1.962 - 1.976: 98.9289% ( 151) 00:11:03.178 1.976 - 1.990: 99.2166% ( 47) 00:11:03.178 1.990 - 2.003: 99.3022% ( 14) 00:11:03.178 2.003 - 2.017: 99.3451% ( 7) 00:11:03.178 2.017 - 2.031: 99.3757% ( 5) 00:11:03.178 2.073 - 2.087: 99.3879% ( 2) 00:11:03.178 2.337 - 2.351: 99.4002% ( 2) 00:11:03.178 3.506 - 3.520: 99.4063% ( 1) 00:11:03.178 3.562 - 3.590: 99.4124% ( 1) 00:11:03.178 3.617 - 3.645: 99.4185% ( 1) 00:11:03.178 3.757 - 3.784: 99.4247% ( 1) 00:11:03.178 3.840 - 3.868: 99.4308% ( 1) 00:11:03.178 3.868 - 3.896: 99.4369% ( 1) 00:11:03.178 3.923 - 3.951: 99.4430% ( 1) 00:11:03.178 4.146 - 4.174: 99.4491% ( 1) 00:11:03.178 4.202 - 4.230: 99.4553% ( 1) 00:11:03.178 4.230 - 4.257: 99.4614% ( 1) 00:11:03.178 4.257 - 4.285: 99.4675% ( 1) 00:11:03.178 4.313 - 4.341: 99.4736% ( 1) 00:11:03.178 4.591 - 4.619: 99.4797% ( 1) 00:11:03.178 4.619 - 4.647: 99.4920% ( 2) 00:11:03.178 4.647 - 4.675: 99.5042% ( 2) 00:11:03.178 4.703 - 4.730: 99.5103% ( 1) 00:11:03.178 4.786 - 4.814: 99.5165% ( 1) 00:11:03.178 4.925 - 4.953: 99.5226% ( 1) 00:11:03.178 4.953 - 4.981: 99.5287% ( 1) 00:11:03.178 5.760 - 5.788: 99.5348% ( 1) 00:11:03.178 5.899 - 5.927: 99.5409% ( 1) 00:11:03.178 6.094 - 6.122: 99.5471% ( 1) 00:11:03.178 8.403 - 8.459: 99.5532% ( 1) 00:11:03.178 3177.071 - 3191.318: 99.5593% ( 1) 00:11:03.178 3989.148 - 4017.642: 100.0000% ( 72) 00:11:03.178 00:11:03.178 00:12:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:03.178 00:12:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:03.178 00:12:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:03.178 00:12:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:03.178 00:12:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:03.437 [ 00:11:03.437 { 00:11:03.437 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:03.437 "subtype": "Discovery", 00:11:03.437 "listen_addresses": [], 00:11:03.437 "allow_any_host": true, 00:11:03.437 "hosts": [] 00:11:03.437 }, 00:11:03.437 { 00:11:03.437 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:03.437 "subtype": "NVMe", 00:11:03.437 "listen_addresses": [ 00:11:03.437 { 00:11:03.437 "trtype": "VFIOUSER", 00:11:03.437 "adrfam": "IPv4", 00:11:03.437 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 
00:11:03.437 "trsvcid": "0" 00:11:03.437 } 00:11:03.437 ], 00:11:03.437 "allow_any_host": true, 00:11:03.437 "hosts": [], 00:11:03.437 "serial_number": "SPDK1", 00:11:03.437 "model_number": "SPDK bdev Controller", 00:11:03.437 "max_namespaces": 32, 00:11:03.437 "min_cntlid": 1, 00:11:03.437 "max_cntlid": 65519, 00:11:03.437 "namespaces": [ 00:11:03.437 { 00:11:03.437 "nsid": 1, 00:11:03.437 "bdev_name": "Malloc1", 00:11:03.437 "name": "Malloc1", 00:11:03.437 "nguid": "E30B269D37FC40918311B546A806FABC", 00:11:03.437 "uuid": "e30b269d-37fc-4091-8311-b546a806fabc" 00:11:03.437 } 00:11:03.437 ] 00:11:03.437 }, 00:11:03.437 { 00:11:03.437 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:03.437 "subtype": "NVMe", 00:11:03.438 "listen_addresses": [ 00:11:03.438 { 00:11:03.438 "trtype": "VFIOUSER", 00:11:03.438 "adrfam": "IPv4", 00:11:03.438 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:03.438 "trsvcid": "0" 00:11:03.438 } 00:11:03.438 ], 00:11:03.438 "allow_any_host": true, 00:11:03.438 "hosts": [], 00:11:03.438 "serial_number": "SPDK2", 00:11:03.438 "model_number": "SPDK bdev Controller", 00:11:03.438 "max_namespaces": 32, 00:11:03.438 "min_cntlid": 1, 00:11:03.438 "max_cntlid": 65519, 00:11:03.438 "namespaces": [ 00:11:03.438 { 00:11:03.438 "nsid": 1, 00:11:03.438 "bdev_name": "Malloc2", 00:11:03.438 "name": "Malloc2", 00:11:03.438 "nguid": "5CA5FF1D7DF3476E8421A7C4031C7B9E", 00:11:03.438 "uuid": "5ca5ff1d-7df3-476e-8421-a7c4031c7b9e" 00:11:03.438 } 00:11:03.438 ] 00:11:03.438 } 00:11:03.438 ] 00:11:03.438 00:12:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:03.438 00:12:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:03.438 00:12:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1433132 00:11:03.438 00:12:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:03.438 00:12:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1259 -- # local i=0 00:11:03.438 00:12:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:03.438 00:12:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:11:03.438 00:12:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # return 0 00:11:03.438 00:12:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:03.438 00:12:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:03.438 [2024-07-16 00:12:22.265644] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:03.697 Malloc3 00:11:03.697 00:12:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:03.697 [2024-07-16 00:12:22.498379] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:03.697 00:12:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:03.697 Asynchronous Event Request test 00:11:03.697 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:03.697 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:03.697 Registering asynchronous event callbacks... 00:11:03.697 Starting namespace attribute notice tests for all controllers... 00:11:03.697 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:03.697 aer_cb - Changed Namespace 00:11:03.697 Cleaning up... 00:11:03.957 [ 00:11:03.957 { 00:11:03.957 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:03.957 "subtype": "Discovery", 00:11:03.957 "listen_addresses": [], 00:11:03.957 "allow_any_host": true, 00:11:03.957 "hosts": [] 00:11:03.957 }, 00:11:03.957 { 00:11:03.957 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:03.957 "subtype": "NVMe", 00:11:03.957 "listen_addresses": [ 00:11:03.957 { 00:11:03.957 "trtype": "VFIOUSER", 00:11:03.957 "adrfam": "IPv4", 00:11:03.957 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:03.957 "trsvcid": "0" 00:11:03.957 } 00:11:03.957 ], 00:11:03.957 "allow_any_host": true, 00:11:03.957 "hosts": [], 00:11:03.957 "serial_number": "SPDK1", 00:11:03.957 "model_number": "SPDK bdev Controller", 00:11:03.957 "max_namespaces": 32, 00:11:03.957 "min_cntlid": 1, 00:11:03.957 "max_cntlid": 65519, 00:11:03.957 "namespaces": [ 00:11:03.957 { 00:11:03.957 "nsid": 1, 00:11:03.957 "bdev_name": "Malloc1", 00:11:03.957 "name": "Malloc1", 00:11:03.957 "nguid": "E30B269D37FC40918311B546A806FABC", 00:11:03.957 "uuid": "e30b269d-37fc-4091-8311-b546a806fabc" 00:11:03.957 }, 00:11:03.957 { 00:11:03.957 "nsid": 2, 00:11:03.957 "bdev_name": "Malloc3", 00:11:03.957 "name": "Malloc3", 00:11:03.957 "nguid": "6EEF61AF4AD24BEDB4DC430FDAC0B7FA", 00:11:03.957 "uuid": "6eef61af-4ad2-4bed-b4dc-430fdac0b7fa" 00:11:03.957 } 00:11:03.957 ] 00:11:03.957 }, 00:11:03.957 { 00:11:03.957 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:03.957 "subtype": "NVMe", 00:11:03.957 "listen_addresses": [ 00:11:03.957 { 00:11:03.957 "trtype": "VFIOUSER", 00:11:03.957 "adrfam": "IPv4", 00:11:03.957 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:03.957 "trsvcid": "0" 00:11:03.957 } 00:11:03.957 ], 00:11:03.957 "allow_any_host": true, 00:11:03.957 "hosts": [], 00:11:03.957 "serial_number": "SPDK2", 00:11:03.957 "model_number": "SPDK bdev Controller", 00:11:03.957 "max_namespaces": 32, 00:11:03.957 "min_cntlid": 1, 00:11:03.957 
"max_cntlid": 65519, 00:11:03.957 "namespaces": [ 00:11:03.957 { 00:11:03.957 "nsid": 1, 00:11:03.957 "bdev_name": "Malloc2", 00:11:03.957 "name": "Malloc2", 00:11:03.957 "nguid": "5CA5FF1D7DF3476E8421A7C4031C7B9E", 00:11:03.957 "uuid": "5ca5ff1d-7df3-476e-8421-a7c4031c7b9e" 00:11:03.957 } 00:11:03.957 ] 00:11:03.957 } 00:11:03.957 ] 00:11:03.957 00:12:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1433132 00:11:03.958 00:12:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:03.958 00:12:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:03.958 00:12:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:03.958 00:12:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:03.958 [2024-07-16 00:12:22.721942] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:11:03.958 [2024-07-16 00:12:22.721988] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1433144 ] 00:11:03.958 [2024-07-16 00:12:22.751621] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:03.958 [2024-07-16 00:12:22.759447] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:03.958 [2024-07-16 00:12:22.759467] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdfafe21000 00:11:03.958 [2024-07-16 00:12:22.760449] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:03.958 [2024-07-16 00:12:22.761456] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:03.958 [2024-07-16 00:12:22.762462] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:03.958 [2024-07-16 00:12:22.763467] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:03.958 [2024-07-16 00:12:22.764478] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:03.958 [2024-07-16 00:12:22.765482] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:03.958 [2024-07-16 00:12:22.766496] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:03.958 [2024-07-16 00:12:22.767498] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:03.958 [2024-07-16 00:12:22.768508] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:03.958 
[2024-07-16 00:12:22.768518] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdfafe16000 00:11:03.958 [2024-07-16 00:12:22.769457] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:03.958 [2024-07-16 00:12:22.778970] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:03.958 [2024-07-16 00:12:22.778994] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:03.958 [2024-07-16 00:12:22.784071] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:03.958 [2024-07-16 00:12:22.784105] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:03.958 [2024-07-16 00:12:22.784172] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:11:03.958 [2024-07-16 00:12:22.784187] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:03.958 [2024-07-16 00:12:22.784193] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:03.958 [2024-07-16 00:12:22.785078] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:03.958 [2024-07-16 00:12:22.785087] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:03.958 [2024-07-16 00:12:22.785093] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:03.958 [2024-07-16 00:12:22.786086] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:03.958 [2024-07-16 00:12:22.786095] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:03.958 [2024-07-16 00:12:22.786104] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:03.958 [2024-07-16 00:12:22.787094] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:03.958 [2024-07-16 00:12:22.787103] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:03.958 [2024-07-16 00:12:22.788105] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:03.958 [2024-07-16 00:12:22.788114] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:03.958 [2024-07-16 00:12:22.788118] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:03.958 [2024-07-16 
00:12:22.788124] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:03.958 [2024-07-16 00:12:22.788229] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:03.958 [2024-07-16 00:12:22.788233] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:03.958 [2024-07-16 00:12:22.788238] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:03.958 [2024-07-16 00:12:22.789117] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:03.958 [2024-07-16 00:12:22.790123] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:03.958 [2024-07-16 00:12:22.791135] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:03.958 [2024-07-16 00:12:22.792131] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:03.958 [2024-07-16 00:12:22.792172] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:03.958 [2024-07-16 00:12:22.793146] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:03.958 [2024-07-16 00:12:22.793157] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:03.958 [2024-07-16 00:12:22.793161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:03.958 [2024-07-16 00:12:22.793179] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:03.958 [2024-07-16 00:12:22.793186] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:03.958 [2024-07-16 00:12:22.793197] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:03.958 [2024-07-16 00:12:22.793201] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:03.958 [2024-07-16 00:12:22.793212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:03.958 [2024-07-16 00:12:22.801233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:03.958 [2024-07-16 00:12:22.801243] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:03.958 [2024-07-16 00:12:22.801252] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:03.958 [2024-07-16 00:12:22.801256] 
nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:03.958 [2024-07-16 00:12:22.801260] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:03.958 [2024-07-16 00:12:22.801264] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:03.958 [2024-07-16 00:12:22.801268] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:03.958 [2024-07-16 00:12:22.801272] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:03.958 [2024-07-16 00:12:22.801279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:03.958 [2024-07-16 00:12:22.801288] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:04.219 [2024-07-16 00:12:22.809230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:04.219 [2024-07-16 00:12:22.809244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.219 [2024-07-16 00:12:22.809252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.219 [2024-07-16 00:12:22.809259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.219 [2024-07-16 00:12:22.809266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.219 [2024-07-16 00:12:22.809271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:04.219 [2024-07-16 00:12:22.809278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:04.219 [2024-07-16 00:12:22.809287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:04.219 [2024-07-16 00:12:22.817418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:04.219 [2024-07-16 00:12:22.817427] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:04.219 [2024-07-16 00:12:22.817432] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:04.219 [2024-07-16 00:12:22.817438] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:04.219 [2024-07-16 00:12:22.817443] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 
00:11:04.219 [2024-07-16 00:12:22.817451] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:04.219 [2024-07-16 00:12:22.825231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:04.219 [2024-07-16 00:12:22.825283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:04.219 [2024-07-16 00:12:22.825290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:04.219 [2024-07-16 00:12:22.825300] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:04.219 [2024-07-16 00:12:22.825304] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:04.219 [2024-07-16 00:12:22.825310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:04.219 [2024-07-16 00:12:22.833230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:04.219 [2024-07-16 00:12:22.833241] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:04.219 [2024-07-16 00:12:22.833253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:04.219 [2024-07-16 00:12:22.833260] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:04.219 [2024-07-16 00:12:22.833266] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:04.219 [2024-07-16 00:12:22.833270] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:04.219 [2024-07-16 00:12:22.833276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:04.219 [2024-07-16 00:12:22.841229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:04.219 [2024-07-16 00:12:22.841242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:04.219 [2024-07-16 00:12:22.841249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:04.219 [2024-07-16 00:12:22.841256] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:04.220 [2024-07-16 00:12:22.841260] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:04.220 [2024-07-16 00:12:22.841266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:04.220 [2024-07-16 00:12:22.849230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
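Every "setting state to ..." line in this trace is one step of the driver's controller-initialization state machine: read VS and CAP, disable the controller and wait for CSTS.RDY = 0 (the reads at offset 0x1c), program the admin queue registers, write CC.EN = 1 (the 0x460001 write at offset 0x14), then issue the Identify and Set Features commands whose completions continue below. A sketch of replaying the attach and distilling that sequence, assuming this job's paths (identify.log is a hypothetical scratch file):

    # Sketch: rerun the debug-logged identify and list the bring-up
    # steps in order, one per line.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/bin/spdk_nvme_identify" \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
      -g -L nvme -L nvme_vfio -L vfio_pci 2>&1 | tee identify.log
    grep -o 'setting state to [^(]*' identify.log | uniq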
00:11:04.220 [2024-07-16 00:12:22.849239] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:04.220 [2024-07-16 00:12:22.849245] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:04.220 [2024-07-16 00:12:22.849253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:04.220 [2024-07-16 00:12:22.849258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:11:04.220 [2024-07-16 00:12:22.849262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:04.220 [2024-07-16 00:12:22.849267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:04.220 [2024-07-16 00:12:22.849271] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:11:04.220 [2024-07-16 00:12:22.849275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:04.220 [2024-07-16 00:12:22.849284] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:04.220 [2024-07-16 00:12:22.849300] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:04.220 [2024-07-16 00:12:22.857230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:04.220 [2024-07-16 00:12:22.857246] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:04.220 [2024-07-16 00:12:22.865231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:04.220 [2024-07-16 00:12:22.865243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:04.220 [2024-07-16 00:12:22.873229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:04.220 [2024-07-16 00:12:22.873241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:04.220 [2024-07-16 00:12:22.881230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:04.220 [2024-07-16 00:12:22.881248] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:04.220 [2024-07-16 00:12:22.881253] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:04.220 [2024-07-16 00:12:22.881256] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:04.220 [2024-07-16 00:12:22.881259] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 
0x2000002f7000 00:11:04.220 [2024-07-16 00:12:22.881265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:04.220 [2024-07-16 00:12:22.881272] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:04.220 [2024-07-16 00:12:22.881276] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:04.220 [2024-07-16 00:12:22.881281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:04.220 [2024-07-16 00:12:22.881287] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:04.220 [2024-07-16 00:12:22.881291] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:04.220 [2024-07-16 00:12:22.881296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:04.220 [2024-07-16 00:12:22.881303] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:04.220 [2024-07-16 00:12:22.881307] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:04.220 [2024-07-16 00:12:22.881312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:04.220 [2024-07-16 00:12:22.889233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:04.220 [2024-07-16 00:12:22.889250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:04.220 [2024-07-16 00:12:22.889259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:04.220 [2024-07-16 00:12:22.889265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:04.220 ===================================================== 00:11:04.220 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:04.220 ===================================================== 00:11:04.220 Controller Capabilities/Features 00:11:04.220 ================================ 00:11:04.220 Vendor ID: 4e58 00:11:04.220 Subsystem Vendor ID: 4e58 00:11:04.220 Serial Number: SPDK2 00:11:04.220 Model Number: SPDK bdev Controller 00:11:04.220 Firmware Version: 24.09 00:11:04.220 Recommended Arb Burst: 6 00:11:04.220 IEEE OUI Identifier: 8d 6b 50 00:11:04.220 Multi-path I/O 00:11:04.220 May have multiple subsystem ports: Yes 00:11:04.220 May have multiple controllers: Yes 00:11:04.220 Associated with SR-IOV VF: No 00:11:04.220 Max Data Transfer Size: 131072 00:11:04.220 Max Number of Namespaces: 32 00:11:04.220 Max Number of I/O Queues: 127 00:11:04.220 NVMe Specification Version (VS): 1.3 00:11:04.220 NVMe Specification Version (Identify): 1.3 00:11:04.220 Maximum Queue Entries: 256 00:11:04.220 Contiguous Queues Required: Yes 00:11:04.220 Arbitration Mechanisms Supported 00:11:04.220 Weighted Round Robin: Not Supported 00:11:04.220 Vendor Specific: Not Supported 00:11:04.220 
Reset Timeout: 15000 ms 00:11:04.220 Doorbell Stride: 4 bytes 00:11:04.220 NVM Subsystem Reset: Not Supported 00:11:04.220 Command Sets Supported 00:11:04.220 NVM Command Set: Supported 00:11:04.220 Boot Partition: Not Supported 00:11:04.220 Memory Page Size Minimum: 4096 bytes 00:11:04.220 Memory Page Size Maximum: 4096 bytes 00:11:04.220 Persistent Memory Region: Not Supported 00:11:04.220 Optional Asynchronous Events Supported 00:11:04.220 Namespace Attribute Notices: Supported 00:11:04.220 Firmware Activation Notices: Not Supported 00:11:04.220 ANA Change Notices: Not Supported 00:11:04.220 PLE Aggregate Log Change Notices: Not Supported 00:11:04.220 LBA Status Info Alert Notices: Not Supported 00:11:04.220 EGE Aggregate Log Change Notices: Not Supported 00:11:04.220 Normal NVM Subsystem Shutdown event: Not Supported 00:11:04.220 Zone Descriptor Change Notices: Not Supported 00:11:04.220 Discovery Log Change Notices: Not Supported 00:11:04.220 Controller Attributes 00:11:04.220 128-bit Host Identifier: Supported 00:11:04.220 Non-Operational Permissive Mode: Not Supported 00:11:04.220 NVM Sets: Not Supported 00:11:04.220 Read Recovery Levels: Not Supported 00:11:04.220 Endurance Groups: Not Supported 00:11:04.220 Predictable Latency Mode: Not Supported 00:11:04.220 Traffic Based Keep ALive: Not Supported 00:11:04.220 Namespace Granularity: Not Supported 00:11:04.220 SQ Associations: Not Supported 00:11:04.220 UUID List: Not Supported 00:11:04.220 Multi-Domain Subsystem: Not Supported 00:11:04.220 Fixed Capacity Management: Not Supported 00:11:04.220 Variable Capacity Management: Not Supported 00:11:04.220 Delete Endurance Group: Not Supported 00:11:04.220 Delete NVM Set: Not Supported 00:11:04.220 Extended LBA Formats Supported: Not Supported 00:11:04.220 Flexible Data Placement Supported: Not Supported 00:11:04.220 00:11:04.220 Controller Memory Buffer Support 00:11:04.220 ================================ 00:11:04.220 Supported: No 00:11:04.220 00:11:04.220 Persistent Memory Region Support 00:11:04.220 ================================ 00:11:04.220 Supported: No 00:11:04.220 00:11:04.220 Admin Command Set Attributes 00:11:04.220 ============================ 00:11:04.220 Security Send/Receive: Not Supported 00:11:04.220 Format NVM: Not Supported 00:11:04.220 Firmware Activate/Download: Not Supported 00:11:04.220 Namespace Management: Not Supported 00:11:04.220 Device Self-Test: Not Supported 00:11:04.220 Directives: Not Supported 00:11:04.220 NVMe-MI: Not Supported 00:11:04.220 Virtualization Management: Not Supported 00:11:04.220 Doorbell Buffer Config: Not Supported 00:11:04.220 Get LBA Status Capability: Not Supported 00:11:04.220 Command & Feature Lockdown Capability: Not Supported 00:11:04.220 Abort Command Limit: 4 00:11:04.220 Async Event Request Limit: 4 00:11:04.220 Number of Firmware Slots: N/A 00:11:04.220 Firmware Slot 1 Read-Only: N/A 00:11:04.220 Firmware Activation Without Reset: N/A 00:11:04.220 Multiple Update Detection Support: N/A 00:11:04.220 Firmware Update Granularity: No Information Provided 00:11:04.220 Per-Namespace SMART Log: No 00:11:04.220 Asymmetric Namespace Access Log Page: Not Supported 00:11:04.220 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:04.220 Command Effects Log Page: Supported 00:11:04.220 Get Log Page Extended Data: Supported 00:11:04.220 Telemetry Log Pages: Not Supported 00:11:04.220 Persistent Event Log Pages: Not Supported 00:11:04.220 Supported Log Pages Log Page: May Support 00:11:04.220 Commands Supported & Effects Log Page: Not 
Supported 00:11:04.220 Feature Identifiers & Effects Log Page:May Support 00:11:04.220 NVMe-MI Commands & Effects Log Page: May Support 00:11:04.220 Data Area 4 for Telemetry Log: Not Supported 00:11:04.220 Error Log Page Entries Supported: 128 00:11:04.220 Keep Alive: Supported 00:11:04.220 Keep Alive Granularity: 10000 ms 00:11:04.220 00:11:04.220 NVM Command Set Attributes 00:11:04.220 ========================== 00:11:04.220 Submission Queue Entry Size 00:11:04.220 Max: 64 00:11:04.220 Min: 64 00:11:04.220 Completion Queue Entry Size 00:11:04.221 Max: 16 00:11:04.221 Min: 16 00:11:04.221 Number of Namespaces: 32 00:11:04.221 Compare Command: Supported 00:11:04.221 Write Uncorrectable Command: Not Supported 00:11:04.221 Dataset Management Command: Supported 00:11:04.221 Write Zeroes Command: Supported 00:11:04.221 Set Features Save Field: Not Supported 00:11:04.221 Reservations: Not Supported 00:11:04.221 Timestamp: Not Supported 00:11:04.221 Copy: Supported 00:11:04.221 Volatile Write Cache: Present 00:11:04.221 Atomic Write Unit (Normal): 1 00:11:04.221 Atomic Write Unit (PFail): 1 00:11:04.221 Atomic Compare & Write Unit: 1 00:11:04.221 Fused Compare & Write: Supported 00:11:04.221 Scatter-Gather List 00:11:04.221 SGL Command Set: Supported (Dword aligned) 00:11:04.221 SGL Keyed: Not Supported 00:11:04.221 SGL Bit Bucket Descriptor: Not Supported 00:11:04.221 SGL Metadata Pointer: Not Supported 00:11:04.221 Oversized SGL: Not Supported 00:11:04.221 SGL Metadata Address: Not Supported 00:11:04.221 SGL Offset: Not Supported 00:11:04.221 Transport SGL Data Block: Not Supported 00:11:04.221 Replay Protected Memory Block: Not Supported 00:11:04.221 00:11:04.221 Firmware Slot Information 00:11:04.221 ========================= 00:11:04.221 Active slot: 1 00:11:04.221 Slot 1 Firmware Revision: 24.09 00:11:04.221 00:11:04.221 00:11:04.221 Commands Supported and Effects 00:11:04.221 ============================== 00:11:04.221 Admin Commands 00:11:04.221 -------------- 00:11:04.221 Get Log Page (02h): Supported 00:11:04.221 Identify (06h): Supported 00:11:04.221 Abort (08h): Supported 00:11:04.221 Set Features (09h): Supported 00:11:04.221 Get Features (0Ah): Supported 00:11:04.221 Asynchronous Event Request (0Ch): Supported 00:11:04.221 Keep Alive (18h): Supported 00:11:04.221 I/O Commands 00:11:04.221 ------------ 00:11:04.221 Flush (00h): Supported LBA-Change 00:11:04.221 Write (01h): Supported LBA-Change 00:11:04.221 Read (02h): Supported 00:11:04.221 Compare (05h): Supported 00:11:04.221 Write Zeroes (08h): Supported LBA-Change 00:11:04.221 Dataset Management (09h): Supported LBA-Change 00:11:04.221 Copy (19h): Supported LBA-Change 00:11:04.221 00:11:04.221 Error Log 00:11:04.221 ========= 00:11:04.221 00:11:04.221 Arbitration 00:11:04.221 =========== 00:11:04.221 Arbitration Burst: 1 00:11:04.221 00:11:04.221 Power Management 00:11:04.221 ================ 00:11:04.221 Number of Power States: 1 00:11:04.221 Current Power State: Power State #0 00:11:04.221 Power State #0: 00:11:04.221 Max Power: 0.00 W 00:11:04.221 Non-Operational State: Operational 00:11:04.221 Entry Latency: Not Reported 00:11:04.221 Exit Latency: Not Reported 00:11:04.221 Relative Read Throughput: 0 00:11:04.221 Relative Read Latency: 0 00:11:04.221 Relative Write Throughput: 0 00:11:04.221 Relative Write Latency: 0 00:11:04.221 Idle Power: Not Reported 00:11:04.221 Active Power: Not Reported 00:11:04.221 Non-Operational Permissive Mode: Not Supported 00:11:04.221 00:11:04.221 Health Information 00:11:04.221 
================== 00:11:04.221 Critical Warnings: 00:11:04.221 Available Spare Space: OK 00:11:04.221 Temperature: OK 00:11:04.221 Device Reliability: OK 00:11:04.221 Read Only: No 00:11:04.221 Volatile Memory Backup: OK 00:11:04.221 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:04.221 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:04.221 Available Spare: 0% 00:11:04.221 Available Spare Threshold: 0% 00:11:04.221 Life Percentage Used: 0% 00:11:04.221 Data Units Read: 0 00:11:04.221 Data Units Written: 0 00:11:04.221 Host Read Commands: 0 00:11:04.221 Host Write Commands: 0 00:11:04.221 Controller Busy Time: 0 minutes 00:11:04.221 Power Cycles: 0 00:11:04.221 Power On Hours: 0 hours 00:11:04.221 Unsafe Shutdowns: 0 00:11:04.221 Unrecoverable Media Errors: 0 00:11:04.221 Lifetime Error Log Entries: 0 00:11:04.221 Warning Temperature Time: 0 minutes 00:11:04.221 Critical Temperature Time: 0 minutes 00:11:04.221 00:11:04.221 Number of Queues 00:11:04.221 ================ 00:11:04.221 Number of I/O Submission Queues: 127 00:11:04.221 Number of I/O Completion Queues: 127 00:11:04.221 00:11:04.221 Active Namespaces 00:11:04.221 ================= 00:11:04.221 Namespace ID:1 00:11:04.221 Error Recovery Timeout: Unlimited 00:11:04.221 Command Set Identifier: NVM (00h) 00:11:04.221 Deallocate: Supported 00:11:04.221 Deallocated/Unwritten Error: Not Supported 00:11:04.221 Deallocated Read Value: Unknown 00:11:04.221 Deallocate in Write Zeroes: Not Supported 00:11:04.221 Deallocated Guard Field: 0xFFFF 00:11:04.221 Flush: Supported 00:11:04.221 Reservation: Supported 00:11:04.221 Namespace Sharing Capabilities: Multiple Controllers 00:11:04.221 Size (in LBAs): 131072 (0GiB) 00:11:04.221 Capacity (in LBAs): 131072 (0GiB) 00:11:04.221 Utilization (in LBAs): 131072 (0GiB) 00:11:04.221 NGUID: 5CA5FF1D7DF3476E8421A7C4031C7B9E 00:11:04.221 UUID: 5ca5ff1d-7df3-476e-8421-a7c4031c7b9e 00:11:04.221 Thin Provisioning: Not Supported 00:11:04.221 Per-NS Atomic Units: Yes 00:11:04.221 Atomic Boundary Size (Normal): 0 00:11:04.221 Atomic Boundary Size (PFail): 0 00:11:04.221 Atomic Boundary Offset: 0 00:11:04.221 Maximum Single Source Range Length: 65535 00:11:04.221 Maximum Copy Length: 65535 00:11:04.221 Maximum Source Range Count: 1 00:11:04.221 NGUID/EUI64 Never Reused: No 00:11:04.221 Namespace Write Protected: No 00:11:04.221 Number of LBA Formats: 1 00:11:04.221 Current LBA Format: LBA Format #00 00:11:04.221 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:04.221 00:11:04.221
[2024-07-16 00:12:22.889354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:04.221 [2024-07-16 00:12:22.897232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:04.221 [2024-07-16 00:12:22.897266] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:04.221 [2024-07-16 00:12:22.897274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.221 [2024-07-16 00:12:22.897280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.221 [2024-07-16 00:12:22.897285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.221 [2024-07-16 00:12:22.897290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.221 [2024-07-16 00:12:22.897342] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:04.221 [2024-07-16 00:12:22.897352] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:04.221 [2024-07-16 00:12:22.898352] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:04.221 [2024-07-16 00:12:22.898400] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:04.221 [2024-07-16 00:12:22.898408] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:04.221 [2024-07-16 00:12:22.899369] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:04.221 [2024-07-16 00:12:22.899386] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:04.221 [2024-07-16 00:12:22.899433] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:04.221 [2024-07-16 00:12:22.900412] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:12:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:04.481 [2024-07-16 00:12:23.117582] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:09.762 Initializing NVMe Controllers 00:11:09.762 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:09.762 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:09.762 Initialization complete. Launching workers.
00:11:09.762 ======================================================== 00:11:09.762 Latency(us) 00:11:09.762 Device Information : IOPS MiB/s Average min max 00:11:09.762 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39949.70 156.05 3203.84 959.17 6625.30 00:11:09.762 ======================================================== 00:11:09.762 Total : 39949.70 156.05 3203.84 959.17 6625.30 00:11:09.762 00:11:09.762 [2024-07-16 00:12:28.223469] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:09.762 00:12:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:09.762 [2024-07-16 00:12:28.438078] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:15.033 Initializing NVMe Controllers 00:11:15.033 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:15.033 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:15.033 Initialization complete. Launching workers. 00:11:15.033 ======================================================== 00:11:15.033 Latency(us) 00:11:15.033 Device Information : IOPS MiB/s Average min max 00:11:15.033 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39864.06 155.72 3210.73 959.80 6623.91 00:11:15.033 ======================================================== 00:11:15.033 Total : 39864.06 155.72 3210.73 959.80 6623.91 00:11:15.033 00:11:15.033 [2024-07-16 00:12:33.461299] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:15.033 00:12:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:15.033 [2024-07-16 00:12:33.647029] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:20.305 [2024-07-16 00:12:38.795317] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:20.305 Initializing NVMe Controllers 00:11:20.306 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:20.306 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:20.306 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:11:20.306 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:11:20.306 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:11:20.306 Initialization complete. Launching workers. 
00:11:20.306 Starting thread on core 2 00:11:20.306 Starting thread on core 3 00:11:20.306 Starting thread on core 1 00:11:20.306 00:12:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:11:20.306 [2024-07-16 00:12:39.081633] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:23.652 [2024-07-16 00:12:42.152523] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:23.652 Initializing NVMe Controllers 00:11:23.652 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:23.652 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:23.652 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:11:23.652 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:11:23.652 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:11:23.652 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:11:23.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:23.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:23.652 Initialization complete. Launching workers. 00:11:23.652 Starting thread on core 1 with urgent priority queue 00:11:23.652 Starting thread on core 2 with urgent priority queue 00:11:23.652 Starting thread on core 3 with urgent priority queue 00:11:23.652 Starting thread on core 0 with urgent priority queue 00:11:23.652 SPDK bdev Controller (SPDK2 ) core 0: 8794.33 IO/s 11.37 secs/100000 ios 00:11:23.652 SPDK bdev Controller (SPDK2 ) core 1: 7912.67 IO/s 12.64 secs/100000 ios 00:11:23.652 SPDK bdev Controller (SPDK2 ) core 2: 7925.00 IO/s 12.62 secs/100000 ios 00:11:23.652 SPDK bdev Controller (SPDK2 ) core 3: 9190.33 IO/s 10.88 secs/100000 ios 00:11:23.652 ======================================================== 00:11:23.652 00:11:23.652 00:12:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:23.652 [2024-07-16 00:12:42.421642] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:23.652 Initializing NVMe Controllers 00:11:23.652 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:23.652 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:23.652 Namespace ID: 1 size: 0GB 00:11:23.652 Initialization complete. 00:11:23.652 INFO: using host memory buffer for IO 00:11:23.652 Hello world! 
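Every tool in this stretch of the test (spdk_nvme_perf, reconnect, arbitration, hello_world, and the overhead tool below) reaches the controller through the same SPDK transport ID string passed via -r. A minimal sketch of the pattern, reusing the socket path, subsystem NQN and flags exactly as they appear in the invocations above (the TRID variable is illustrative shorthand, not part of the test script):

  # trtype selects the vfio-user transport, traddr is the controller socket
  # directory, and subnqn names the subsystem under test
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # e.g. the 4 KiB, queue-depth 32, 50/50 random read/write reconnect run above
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE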
00:11:23.652 [2024-07-16 00:12:42.431713] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:23.652 00:12:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:23.910 [2024-07-16 00:12:42.702188] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:25.287 Initializing NVMe Controllers 00:11:25.287 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:25.287 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:25.287 Initialization complete. Launching workers. 00:11:25.287 submit (in ns) avg, min, max = 7716.1, 3267.0, 3998698.3 00:11:25.287 complete (in ns) avg, min, max = 21226.5, 1807.0, 3997849.6 00:11:25.287 00:11:25.287 Submit histogram 00:11:25.287 ================ 00:11:25.287 Range in us Cumulative Count 00:11:25.287 3.256 - 3.270: 0.0062% ( 1) 00:11:25.287 3.270 - 3.283: 0.0311% ( 4) 00:11:25.287 3.283 - 3.297: 0.3417% ( 50) 00:11:25.287 3.297 - 3.311: 0.8138% ( 76) 00:11:25.287 3.311 - 3.325: 1.2922% ( 77) 00:11:25.287 3.325 - 3.339: 2.0190% ( 117) 00:11:25.287 3.339 - 3.353: 4.5536% ( 408) 00:11:25.287 3.353 - 3.367: 9.3931% ( 779) 00:11:25.287 3.367 - 3.381: 15.2389% ( 941) 00:11:25.287 3.381 - 3.395: 21.1219% ( 947) 00:11:25.287 3.395 - 3.409: 27.3156% ( 997) 00:11:25.287 3.409 - 3.423: 32.6210% ( 854) 00:11:25.287 3.423 - 3.437: 37.2492% ( 745) 00:11:25.287 3.437 - 3.450: 42.9832% ( 923) 00:11:25.287 3.450 - 3.464: 47.9592% ( 801) 00:11:25.287 3.464 - 3.478: 51.4816% ( 567) 00:11:25.287 3.478 - 3.492: 55.1345% ( 588) 00:11:25.287 3.492 - 3.506: 60.9927% ( 943) 00:11:25.287 3.506 - 3.520: 67.4660% ( 1042) 00:11:25.287 3.520 - 3.534: 71.8954% ( 713) 00:11:25.288 3.534 - 3.548: 76.7534% ( 782) 00:11:25.288 3.548 - 3.562: 81.0586% ( 693) 00:11:25.288 3.562 - 3.590: 85.8731% ( 775) 00:11:25.288 3.590 - 3.617: 87.2958% ( 229) 00:11:25.288 3.617 - 3.645: 88.1717% ( 141) 00:11:25.288 3.645 - 3.673: 89.5322% ( 219) 00:11:25.288 3.673 - 3.701: 91.2033% ( 269) 00:11:25.288 3.701 - 3.729: 92.9055% ( 274) 00:11:25.288 3.729 - 3.757: 94.6450% ( 280) 00:11:25.288 3.757 - 3.784: 96.4652% ( 293) 00:11:25.288 3.784 - 3.812: 97.7884% ( 213) 00:11:25.288 3.812 - 3.840: 98.6146% ( 133) 00:11:25.288 3.840 - 3.868: 99.1738% ( 90) 00:11:25.288 3.868 - 3.896: 99.4409% ( 43) 00:11:25.288 3.896 - 3.923: 99.6086% ( 27) 00:11:25.288 3.923 - 3.951: 99.6645% ( 9) 00:11:25.288 5.259 - 5.287: 99.6707% ( 1) 00:11:25.288 5.343 - 5.370: 99.6770% ( 1) 00:11:25.288 5.426 - 5.454: 99.6832% ( 1) 00:11:25.288 5.510 - 5.537: 99.6956% ( 2) 00:11:25.288 5.537 - 5.565: 99.7018% ( 1) 00:11:25.288 5.565 - 5.593: 99.7142% ( 2) 00:11:25.288 5.593 - 5.621: 99.7204% ( 1) 00:11:25.288 5.621 - 5.649: 99.7391% ( 3) 00:11:25.288 5.649 - 5.677: 99.7453% ( 1) 00:11:25.288 5.704 - 5.732: 99.7515% ( 1) 00:11:25.288 5.760 - 5.788: 99.7577% ( 1) 00:11:25.288 5.816 - 5.843: 99.7639% ( 1) 00:11:25.288 6.066 - 6.094: 99.7701% ( 1) 00:11:25.288 6.094 - 6.122: 99.7764% ( 1) 00:11:25.288 6.150 - 6.177: 99.7826% ( 1) 00:11:25.288 6.177 - 6.205: 99.7888% ( 1) 00:11:25.288 6.317 - 6.344: 99.7950% ( 1) 00:11:25.288 6.344 - 6.372: 99.8012% ( 1) 00:11:25.288 6.428 - 6.456: 99.8074% ( 1) 00:11:25.288 6.483 - 6.511: 99.8136% ( 1) 00:11:25.288 6.623 - 6.650: 99.8261% ( 2) 00:11:25.288 6.762 - 6.790: 99.8323% ( 1) 
00:11:25.288 6.817 - 6.845: 99.8571% ( 4) 00:11:25.288 7.096 - 7.123: 99.8633% ( 1) 00:11:25.288 7.179 - 7.235: 99.8695% ( 1) 00:11:25.288 7.346 - 7.402: 99.8758% ( 1) 00:11:25.288 7.513 - 7.569: 99.8820% ( 1) 00:11:25.288 7.680 - 7.736: 99.8882% ( 1) 00:11:25.288 10.630 - 10.685: 99.8944% ( 1) 00:11:25.288 3989.148 - 4017.642: 100.0000% ( 17) 00:11:25.288 00:11:25.288 Complete histogram 00:11:25.288 ================== 00:11:25.288 Range in us Cumulative Count 00:11:25.288 1.795 - 1.809: 0.0062% ( 1) 00:11:25.288 1.809 - 1.823: 0.2671% ( 42) 00:11:25.288 1.823 - 1.837: 1.9693% ( 274) 00:11:25.288 1.837 - 1.850: 3.4478% ( 238) 00:11:25.288 1.850 - 1.864: 4.1871% ( 119) 00:11:25.288 1.864 - 1.878: 8.6351% ( 716) 00:11:25.288 1.878 - 1.892: 55.9918% ( 7623) 00:11:25.288 1.892 - 1.906: 89.9174% ( 5461) 00:11:25.288 1.906 - 1.920: 94.6822% ( 767) 00:11:25.288 1.920 - 1.934: 96.2229% ( 248) 00:11:25.288 1.934 - 1.948: 96.8069% ( 94) 00:11:25.288 1.948 - [2024-07-16 00:12:43.801322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:25.288 1.962: 97.6766% ( 140) 00:11:25.288 1.962 - 1.976: 98.5960% ( 148) 00:11:25.288 1.976 - 1.990: 99.0868% ( 79) 00:11:25.288 1.990 - 2.003: 99.1675% ( 13) 00:11:25.288 2.003 - 2.017: 99.2297% ( 10) 00:11:25.288 2.017 - 2.031: 99.2545% ( 4) 00:11:25.288 2.031 - 2.045: 99.2607% ( 1) 00:11:25.288 2.045 - 2.059: 99.2794% ( 3) 00:11:25.288 2.059 - 2.073: 99.2856% ( 1) 00:11:25.288 2.101 - 2.115: 99.2918% ( 1) 00:11:25.288 2.296 - 2.310: 99.2980% ( 1) 00:11:25.288 3.409 - 3.423: 99.3042% ( 1) 00:11:25.288 3.617 - 3.645: 99.3104% ( 1) 00:11:25.288 3.645 - 3.673: 99.3166% ( 1) 00:11:25.288 3.701 - 3.729: 99.3291% ( 2) 00:11:25.288 3.757 - 3.784: 99.3415% ( 2) 00:11:25.288 3.868 - 3.896: 99.3477% ( 1) 00:11:25.288 3.896 - 3.923: 99.3539% ( 1) 00:11:25.288 3.923 - 3.951: 99.3601% ( 1) 00:11:25.288 3.979 - 4.007: 99.3726% ( 2) 00:11:25.288 4.118 - 4.146: 99.3788% ( 1) 00:11:25.288 4.146 - 4.174: 99.3850% ( 1) 00:11:25.288 4.174 - 4.202: 99.3912% ( 1) 00:11:25.288 4.202 - 4.230: 99.3974% ( 1) 00:11:25.288 4.257 - 4.285: 99.4036% ( 1) 00:11:25.288 4.285 - 4.313: 99.4098% ( 1) 00:11:25.288 4.369 - 4.397: 99.4160% ( 1) 00:11:25.288 4.508 - 4.536: 99.4223% ( 1) 00:11:25.288 4.563 - 4.591: 99.4285% ( 1) 00:11:25.288 4.730 - 4.758: 99.4347% ( 1) 00:11:25.288 4.842 - 4.870: 99.4409% ( 1) 00:11:25.288 4.897 - 4.925: 99.4471% ( 1) 00:11:25.288 5.009 - 5.037: 99.4533% ( 1) 00:11:25.288 5.037 - 5.064: 99.4595% ( 1) 00:11:25.288 5.287 - 5.315: 99.4657% ( 1) 00:11:25.288 5.593 - 5.621: 99.4720% ( 1) 00:11:25.288 5.621 - 5.649: 99.4782% ( 1) 00:11:25.288 5.732 - 5.760: 99.4844% ( 1) 00:11:25.288 5.955 - 5.983: 99.4906% ( 1) 00:11:25.288 6.038 - 6.066: 99.4968% ( 1) 00:11:25.288 7.680 - 7.736: 99.5030% ( 1) 00:11:25.288 7.847 - 7.903: 99.5092% ( 1) 00:11:25.288 8.849 - 8.904: 99.5154% ( 1) 00:11:25.288 3590.233 - 3604.480: 99.5216% ( 1) 00:11:25.288 3989.148 - 4017.642: 100.0000% ( 77) 00:11:25.288 00:11:25.288 00:12:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:11:25.288 00:12:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:25.288 00:12:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:11:25.288 00:12:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:11:25.288 
00:12:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:25.288 [ 00:11:25.288 { 00:11:25.288 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:25.288 "subtype": "Discovery", 00:11:25.288 "listen_addresses": [], 00:11:25.288 "allow_any_host": true, 00:11:25.288 "hosts": [] 00:11:25.288 }, 00:11:25.288 { 00:11:25.288 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:25.288 "subtype": "NVMe", 00:11:25.288 "listen_addresses": [ 00:11:25.288 { 00:11:25.288 "trtype": "VFIOUSER", 00:11:25.288 "adrfam": "IPv4", 00:11:25.288 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:25.288 "trsvcid": "0" 00:11:25.288 } 00:11:25.288 ], 00:11:25.288 "allow_any_host": true, 00:11:25.288 "hosts": [], 00:11:25.288 "serial_number": "SPDK1", 00:11:25.288 "model_number": "SPDK bdev Controller", 00:11:25.288 "max_namespaces": 32, 00:11:25.288 "min_cntlid": 1, 00:11:25.288 "max_cntlid": 65519, 00:11:25.288 "namespaces": [ 00:11:25.288 { 00:11:25.288 "nsid": 1, 00:11:25.288 "bdev_name": "Malloc1", 00:11:25.288 "name": "Malloc1", 00:11:25.288 "nguid": "E30B269D37FC40918311B546A806FABC", 00:11:25.288 "uuid": "e30b269d-37fc-4091-8311-b546a806fabc" 00:11:25.288 }, 00:11:25.288 { 00:11:25.288 "nsid": 2, 00:11:25.288 "bdev_name": "Malloc3", 00:11:25.288 "name": "Malloc3", 00:11:25.288 "nguid": "6EEF61AF4AD24BEDB4DC430FDAC0B7FA", 00:11:25.288 "uuid": "6eef61af-4ad2-4bed-b4dc-430fdac0b7fa" 00:11:25.288 } 00:11:25.288 ] 00:11:25.288 }, 00:11:25.288 { 00:11:25.288 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:25.288 "subtype": "NVMe", 00:11:25.288 "listen_addresses": [ 00:11:25.288 { 00:11:25.288 "trtype": "VFIOUSER", 00:11:25.288 "adrfam": "IPv4", 00:11:25.288 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:25.288 "trsvcid": "0" 00:11:25.288 } 00:11:25.288 ], 00:11:25.288 "allow_any_host": true, 00:11:25.288 "hosts": [], 00:11:25.288 "serial_number": "SPDK2", 00:11:25.288 "model_number": "SPDK bdev Controller", 00:11:25.288 "max_namespaces": 32, 00:11:25.288 "min_cntlid": 1, 00:11:25.288 "max_cntlid": 65519, 00:11:25.288 "namespaces": [ 00:11:25.288 { 00:11:25.288 "nsid": 1, 00:11:25.288 "bdev_name": "Malloc2", 00:11:25.288 "name": "Malloc2", 00:11:25.288 "nguid": "5CA5FF1D7DF3476E8421A7C4031C7B9E", 00:11:25.288 "uuid": "5ca5ff1d-7df3-476e-8421-a7c4031c7b9e" 00:11:25.288 } 00:11:25.288 ] 00:11:25.288 } 00:11:25.288 ] 00:11:25.288 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:25.288 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1436617 00:11:25.289 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:25.289 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:11:25.289 00:12:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1259 -- # local i=0 00:11:25.289 00:12:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:25.289 00:12:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:11:25.289 00:12:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # return 0 00:11:25.289 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:25.289 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:11:25.596 [2024-07-16 00:12:44.172701] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:25.596 Malloc4 00:11:25.596 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:11:25.596 [2024-07-16 00:12:44.399436] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:25.596 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:25.596 Asynchronous Event Request test 00:11:25.596 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:25.596 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:25.596 Registering asynchronous event callbacks... 00:11:25.596 Starting namespace attribute notice tests for all controllers... 00:11:25.596 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:25.596 aer_cb - Changed Namespace 00:11:25.596 Cleaning up... 00:11:25.888 [ 00:11:25.888 { 00:11:25.888 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:25.888 "subtype": "Discovery", 00:11:25.888 "listen_addresses": [], 00:11:25.888 "allow_any_host": true, 00:11:25.888 "hosts": [] 00:11:25.888 }, 00:11:25.888 { 00:11:25.888 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:25.888 "subtype": "NVMe", 00:11:25.888 "listen_addresses": [ 00:11:25.888 { 00:11:25.888 "trtype": "VFIOUSER", 00:11:25.888 "adrfam": "IPv4", 00:11:25.888 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:25.888 "trsvcid": "0" 00:11:25.888 } 00:11:25.888 ], 00:11:25.888 "allow_any_host": true, 00:11:25.888 "hosts": [], 00:11:25.888 "serial_number": "SPDK1", 00:11:25.888 "model_number": "SPDK bdev Controller", 00:11:25.888 "max_namespaces": 32, 00:11:25.888 "min_cntlid": 1, 00:11:25.888 "max_cntlid": 65519, 00:11:25.888 "namespaces": [ 00:11:25.888 { 00:11:25.888 "nsid": 1, 00:11:25.888 "bdev_name": "Malloc1", 00:11:25.888 "name": "Malloc1", 00:11:25.888 "nguid": "E30B269D37FC40918311B546A806FABC", 00:11:25.888 "uuid": "e30b269d-37fc-4091-8311-b546a806fabc" 00:11:25.888 }, 00:11:25.888 { 00:11:25.888 "nsid": 2, 00:11:25.888 "bdev_name": "Malloc3", 00:11:25.888 "name": "Malloc3", 00:11:25.888 "nguid": "6EEF61AF4AD24BEDB4DC430FDAC0B7FA", 00:11:25.888 "uuid": "6eef61af-4ad2-4bed-b4dc-430fdac0b7fa" 00:11:25.888 } 00:11:25.888 ] 00:11:25.888 }, 00:11:25.888 { 00:11:25.888 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:25.888 "subtype": "NVMe", 00:11:25.888 "listen_addresses": [ 00:11:25.888 { 00:11:25.888 "trtype": "VFIOUSER", 00:11:25.888 "adrfam": "IPv4", 00:11:25.888 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:25.888 "trsvcid": "0" 00:11:25.888 } 00:11:25.888 ], 00:11:25.888 "allow_any_host": true, 00:11:25.888 "hosts": [], 00:11:25.888 "serial_number": "SPDK2", 00:11:25.888 "model_number": "SPDK bdev Controller", 00:11:25.888 "max_namespaces": 32, 00:11:25.888 "min_cntlid": 1, 00:11:25.888 
"max_cntlid": 65519, 00:11:25.888 "namespaces": [ 00:11:25.888 { 00:11:25.888 "nsid": 1, 00:11:25.888 "bdev_name": "Malloc2", 00:11:25.888 "name": "Malloc2", 00:11:25.888 "nguid": "5CA5FF1D7DF3476E8421A7C4031C7B9E", 00:11:25.888 "uuid": "5ca5ff1d-7df3-476e-8421-a7c4031c7b9e" 00:11:25.888 }, 00:11:25.888 { 00:11:25.888 "nsid": 2, 00:11:25.888 "bdev_name": "Malloc4", 00:11:25.888 "name": "Malloc4", 00:11:25.888 "nguid": "17EA0E7D40F94436AC92E523982BCAB5", 00:11:25.888 "uuid": "17ea0e7d-40f9-4436-ac92-e523982bcab5" 00:11:25.888 } 00:11:25.888 ] 00:11:25.888 } 00:11:25.888 ] 00:11:25.888 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1436617 00:11:25.888 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:11:25.888 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1428455 00:11:25.888 00:12:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@942 -- # '[' -z 1428455 ']' 00:11:25.888 00:12:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # kill -0 1428455 00:11:25.888 00:12:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # uname 00:11:25.888 00:12:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:11:25.888 00:12:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1428455 00:11:25.888 00:12:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:11:25.888 00:12:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:11:25.888 00:12:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1428455' 00:11:25.888 killing process with pid 1428455 00:11:25.888 00:12:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@961 -- # kill 1428455 00:11:25.888 00:12:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # wait 1428455 00:11:26.149 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:26.149 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:26.149 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:11:26.149 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:11:26.149 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:11:26.149 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1436836 00:11:26.149 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1436836' 00:11:26.149 Process pid: 1436836 00:11:26.149 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:11:26.149 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:26.149 00:12:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1436836 00:11:26.149 00:12:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@823 -- # '[' -z 1436836 ']' 00:11:26.149 00:12:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.149 00:12:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@828 -- # local 
max_retries=100 00:11:26.149 00:12:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.149 00:12:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:26.149 00:12:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:26.149 [2024-07-16 00:12:44.956439] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:11:26.149 [2024-07-16 00:12:44.957274] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:11:26.149 [2024-07-16 00:12:44.957311] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.408 [2024-07-16 00:12:45.011952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.408 [2024-07-16 00:12:45.091561] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.408 [2024-07-16 00:12:45.091600] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.408 [2024-07-16 00:12:45.091609] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.408 [2024-07-16 00:12:45.091617] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.408 [2024-07-16 00:12:45.091623] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:26.408 [2024-07-16 00:12:45.091687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.408 [2024-07-16 00:12:45.091705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.409 [2024-07-16 00:12:45.091791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.409 [2024-07-16 00:12:45.091794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.409 [2024-07-16 00:12:45.165282] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:11:26.409 [2024-07-16 00:12:45.165381] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:11:26.409 [2024-07-16 00:12:45.165526] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:11:26.409 [2024-07-16 00:12:45.165927] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:11:26.409 [2024-07-16 00:12:45.166150] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
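For this interrupt-mode pass the target is relaunched with --interrupt-mode and the VFIOUSER transport is created with the extra '-M -I' transport_args, as the @54 and @64 trace lines around here show. A condensed sketch of that sequence, with both command lines copied from the trace (the backgrounding and pid capture are illustrative assumptions; the test harness does this through its own helpers):

  # start nvmf_tgt on cores 0-3 in interrupt mode, shm id 0, all tracepoint groups
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  nvmfpid=$!
  # once the RPC socket is listening, create the transport; -M -I are the
  # interrupt-mode transport_args exercised by this pass
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I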
00:11:26.977 00:12:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:26.977 00:12:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # return 0 00:11:26.977 00:12:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:28.355 00:12:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:11:28.356 00:12:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:28.356 00:12:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:28.356 00:12:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:28.356 00:12:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:28.356 00:12:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:28.356 Malloc1 00:11:28.356 00:12:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:28.615 00:12:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:28.874 00:12:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:28.874 00:12:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:28.874 00:12:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:28.874 00:12:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:29.133 Malloc2 00:11:29.133 00:12:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:29.392 00:12:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:29.392 00:12:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:29.652 00:12:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:11:29.652 00:12:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1436836 00:11:29.652 00:12:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@942 -- # '[' -z 1436836 ']' 00:11:29.652 00:12:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # kill -0 1436836 00:11:29.652 00:12:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # uname 00:11:29.652 00:12:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:11:29.652 00:12:48 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1436836 00:11:29.652 00:12:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:11:29.652 00:12:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:11:29.652 00:12:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1436836' 00:11:29.652 killing process with pid 1436836 00:11:29.652 00:12:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@961 -- # kill 1436836 00:11:29.652 00:12:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # wait 1436836 00:11:29.911 00:12:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:29.911 00:12:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:29.911 00:11:29.911 real 0m51.327s 00:11:29.911 user 3m23.295s 00:11:29.911 sys 0m3.595s 00:11:29.911 00:12:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1118 -- # xtrace_disable 00:11:29.911 00:12:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:29.911 ************************************ 00:11:29.911 END TEST nvmf_vfio_user 00:11:29.911 ************************************ 00:11:29.911 00:12:48 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:11:29.911 00:12:48 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:29.912 00:12:48 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:11:29.912 00:12:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:29.912 00:12:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:29.912 ************************************ 00:11:29.912 START TEST nvmf_vfio_user_nvme_compliance 00:11:29.912 ************************************ 00:11:29.912 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:30.172 * Looking for test storage... 
00:11:30.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=1437595 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1437595' 00:11:30.172 Process pid: 1437595 00:11:30.172 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:30.173 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:30.173 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1437595 00:11:30.173 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@823 -- # '[' -z 1437595 ']' 00:11:30.173 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.173 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:30.173 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.173 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:30.173 00:12:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:30.173 [2024-07-16 00:12:48.910564] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:11:30.173 [2024-07-16 00:12:48.910608] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.173 [2024-07-16 00:12:48.964883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:30.433 [2024-07-16 00:12:49.038983] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.433 [2024-07-16 00:12:49.039020] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.433 [2024-07-16 00:12:49.039029] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.433 [2024-07-16 00:12:49.039036] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.433 [2024-07-16 00:12:49.039042] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
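The app_setup_trace notices above spell out how to inspect the tracepoints enabled by '-e 0xFFFF'. A small example of acting on that hint while the target (shm id 0) is still running, assuming spdk_trace was built into build/bin as in this workspace:

  # live snapshot of the nvmf tracepoint group from shared-memory instance 0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
  # or keep the raw trace file for offline analysis after the target exits
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0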
00:11:30.433 [2024-07-16 00:12:49.039134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.433 [2024-07-16 00:12:49.039237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.433 [2024-07-16 00:12:49.039243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.001 00:12:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:31.001 00:12:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # return 0 00:11:31.001 00:12:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:11:31.938 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:31.938 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:11:31.938 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:31.938 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:31.938 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:31.938 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:31.938 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:11:31.938 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:31.938 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:31.938 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:31.938 malloc0 00:11:31.938 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:31.939 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:11:31.939 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:31.939 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:31.939 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:31.939 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:31.939 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:31.939 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:32.198 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:32.198 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:32.198 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:32.198 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:32.198 00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:32.198 
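Pulling the rpc_cmd calls from the trace above into one place, the compliance target setup reduces to the following sketch (sizes, NQN and socket path exactly as logged; the rpc shell variable is illustrative shorthand for the rpc_cmd wrapper):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  # 64 MiB malloc bdev with 512-byte blocks backs the single namespace
  $rpc bdev_malloc_create 64 512 -b malloc0
  # subsystem allows any host (-a), serial 'spdk', at most 32 namespaces (-m 32)
  $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_compliance run that follows connects to this endpoint with the matching trtype:VFIOUSER transport ID.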
00:12:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:11:32.198 00:11:32.198 00:11:32.198 CUnit - A unit testing framework for C - Version 2.1-3 00:11:32.198 http://cunit.sourceforge.net/ 00:11:32.198 00:11:32.198 00:11:32.198 Suite: nvme_compliance 00:11:32.198 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-16 00:12:50.942329] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:32.198 [2024-07-16 00:12:50.943678] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:11:32.198 [2024-07-16 00:12:50.943694] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:11:32.198 [2024-07-16 00:12:50.943700] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:11:32.198 [2024-07-16 00:12:50.945346] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:32.198 passed 00:11:32.198 Test: admin_identify_ctrlr_verify_fused ...[2024-07-16 00:12:51.023914] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:32.198 [2024-07-16 00:12:51.026941] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:32.457 passed 00:11:32.457 Test: admin_identify_ns ...[2024-07-16 00:12:51.110754] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:32.457 [2024-07-16 00:12:51.171239] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:32.457 [2024-07-16 00:12:51.179250] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:11:32.457 [2024-07-16 00:12:51.200355] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:32.457 passed 00:11:32.457 Test: admin_get_features_mandatory_features ...[2024-07-16 00:12:51.276634] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:32.457 [2024-07-16 00:12:51.279654] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:32.457 passed 00:11:32.716 Test: admin_get_features_optional_features ...[2024-07-16 00:12:51.357172] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:32.716 [2024-07-16 00:12:51.360194] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:32.716 passed 00:11:32.716 Test: admin_set_features_number_of_queues ...[2024-07-16 00:12:51.438093] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:32.716 [2024-07-16 00:12:51.542314] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:32.716 passed 00:11:32.975 Test: admin_get_log_page_mandatory_logs ...[2024-07-16 00:12:51.618322] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:32.975 [2024-07-16 00:12:51.623355] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:32.975 passed 00:11:32.975 Test: admin_get_log_page_with_lpo ...[2024-07-16 00:12:51.701253] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:32.975 [2024-07-16 00:12:51.771237] ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len 
(512) 00:11:32.975 [2024-07-16 00:12:51.784299] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:32.975 passed 00:11:33.235 Test: fabric_property_get ...[2024-07-16 00:12:51.860655] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.235 [2024-07-16 00:12:51.861881] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:11:33.235 [2024-07-16 00:12:51.863677] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.235 passed 00:11:33.235 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-16 00:12:51.943197] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.235 [2024-07-16 00:12:51.944446] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:11:33.235 [2024-07-16 00:12:51.948230] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.235 passed 00:11:33.235 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-16 00:12:52.025706] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.494 [2024-07-16 00:12:52.110236] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:33.494 [2024-07-16 00:12:52.126233] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:33.494 [2024-07-16 00:12:52.131329] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.494 passed 00:11:33.494 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-16 00:12:52.206471] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.494 [2024-07-16 00:12:52.207701] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:11:33.494 [2024-07-16 00:12:52.209493] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.494 passed 00:11:33.494 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-16 00:12:52.287402] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.753 [2024-07-16 00:12:52.364237] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:33.753 [2024-07-16 00:12:52.388234] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:33.753 [2024-07-16 00:12:52.393333] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.753 passed 00:11:33.753 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-16 00:12:52.470530] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:33.753 [2024-07-16 00:12:52.471759] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:11:33.753 [2024-07-16 00:12:52.471781] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:11:33.753 [2024-07-16 00:12:52.473551] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:33.753 passed 00:11:33.753 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-16 00:12:52.551713] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:34.012 [2024-07-16 00:12:52.642240] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:11:34.012 [2024-07-16 00:12:52.650235] 
vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:11:34.012 [2024-07-16 00:12:52.658234] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:11:34.012 [2024-07-16 00:12:52.666233] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:11:34.012 [2024-07-16 00:12:52.699317] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:34.012 passed 00:11:34.012 Test: admin_create_io_sq_verify_pc ...[2024-07-16 00:12:52.773467] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:34.012 [2024-07-16 00:12:52.792241] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:11:34.012 [2024-07-16 00:12:52.809652] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:34.012 passed 00:11:34.271 Test: admin_create_io_qp_max_qps ...[2024-07-16 00:12:52.887173] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:35.211 [2024-07-16 00:12:53.978238] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:11:35.777 [2024-07-16 00:12:54.362035] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:35.777 passed 00:11:35.777 Test: admin_create_io_sq_shared_cq ...[2024-07-16 00:12:54.439673] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:35.777 [2024-07-16 00:12:54.575233] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:35.777 [2024-07-16 00:12:54.612293] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:36.036 passed 00:11:36.036 00:11:36.036 Run Summary: Type Total Ran Passed Failed Inactive 00:11:36.036 suites 1 1 n/a 0 0 00:11:36.036 tests 18 18 18 0 0 00:11:36.036 asserts 360 360 360 0 n/a 00:11:36.036 00:11:36.036 Elapsed time = 1.506 seconds 00:11:36.036 00:12:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1437595 00:11:36.036 00:12:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@942 -- # '[' -z 1437595 ']' 00:11:36.036 00:12:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # kill -0 1437595 00:11:36.036 00:12:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@947 -- # uname 00:11:36.036 00:12:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:11:36.036 00:12:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1437595 00:11:36.036 00:12:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:11:36.036 00:12:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:11:36.036 00:12:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1437595' 00:11:36.036 killing process with pid 1437595 00:11:36.036 00:12:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@961 -- # kill 1437595 00:11:36.036 00:12:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # wait 1437595 00:11:36.296 00:12:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 
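Note: the compliance suite above (18 tests, all passed) amounts to pointing test/nvme/compliance/nvme_compliance at a vfio-user endpoint through an SPDK transport ID string. A minimal sketch of the same invocation, flags copied verbatim from the logged command line; it assumes an nvmf_tgt is already serving nqn.2021-09.io.spdk:cnode0 on /var/run/vfio-user:

  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
  # -g and -r are taken as-is from the run above; -r selects the target via the transport ID
  ./test/nvme/compliance/nvme_compliance -g -r "$TRID"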
00:11:36.296 00:12:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:11:36.296 00:11:36.296 real 0m6.150s 00:11:36.296 user 0m17.568s 00:11:36.296 sys 0m0.460s 00:11:36.296 00:12:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1118 -- # xtrace_disable 00:11:36.296 00:12:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:36.296 ************************************ 00:11:36.296 END TEST nvmf_vfio_user_nvme_compliance 00:11:36.296 ************************************ 00:11:36.296 00:12:54 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:11:36.296 00:12:54 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:36.296 00:12:54 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:11:36.296 00:12:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:36.296 00:12:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:36.296 ************************************ 00:11:36.296 START TEST nvmf_vfio_user_fuzz 00:11:36.296 ************************************ 00:11:36.296 00:12:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:36.296 * Looking for test storage... 00:11:36.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.296 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
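Note: the nvmf/common.sh sourcing above pins the initiator identity for the whole test: nvme gen-hostnqn mints a UUID-based host NQN, and the matching host ID is the UUID portion of it. A minimal sketch of the same pattern with nvme-cli (the exact derivation inside common.sh may differ; the values change per run):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep just the UUID part, as seen in the trace above
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # later consumed as: nvme connect "${NVME_HOST[@]}" -t tcp -a <traddr> -s 4420 -n <subnqn>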
00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1438593 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1438593' 00:11:36.297 Process pid: 1438593 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1438593 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@823 -- # '[' -z 1438593 ']' 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
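Note: the fuzz target is an ordinary nvmf_tgt instance, and waitforlisten simply blocks until its RPC socket answers. A rough launch-and-wait equivalent, assuming rpc.py from the same tree (the polling loop is illustrative; the real helper in autotest_common.sh does more bookkeeping):

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # -m 0x1: single core; -e 0xFFFF: tracepoint group mask
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
    sleep 0.5                                    # RPC socket not up yet, retry
  done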
00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:36.297 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:37.234 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:37.234 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # return 0 00:11:37.234 00:12:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:11:38.171 00:12:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:38.171 00:12:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:38.171 00:12:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:38.171 00:12:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:38.171 00:12:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:11:38.171 00:12:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:38.171 00:12:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:38.171 00:12:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:38.171 malloc0 00:11:38.171 00:12:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:38.171 00:12:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:11:38.171 00:12:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:38.171 00:12:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:38.171 00:12:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:38.171 00:12:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:38.171 00:12:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:38.171 00:12:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:38.430 00:12:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:38.430 00:12:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:38.430 00:12:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:38.430 00:12:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:38.430 00:12:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:38.430 00:12:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:11:38.430 00:12:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:12:10.568 Fuzzing completed. 
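Note: the subsystem the fuzzer hammered was assembled with the five RPCs traced above. Condensed into direct rpc.py calls (rpc_cmd in the suite is a thin wrapper around rpc.py on /var/tmp/spdk.sock):

  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
  ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0            # 64 MiB backing bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk   # -a: allow any host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  # then the fuzzer itself, flags verbatim from the run above (-t 30 matches the ~30 s wall time)
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a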
Shutting down the fuzz application 00:12:10.568 00:12:10.568 Dumping successful admin opcodes: 00:12:10.568 8, 9, 10, 24, 00:12:10.568 Dumping successful io opcodes: 00:12:10.568 0, 00:12:10.568 NS: 0x200003a1ef00 I/O qp, Total commands completed: 990264, total successful commands: 3877, random_seed: 1896169536 00:12:10.568 NS: 0x200003a1ef00 admin qp, Total commands completed: 244396, total successful commands: 1968, random_seed: 1006391744 00:12:10.568 00:13:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:12:10.568 00:13:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:10.568 00:13:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:10.568 00:13:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:10.568 00:13:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1438593 00:12:10.568 00:13:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@942 -- # '[' -z 1438593 ']' 00:12:10.568 00:13:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # kill -0 1438593 00:12:10.568 00:13:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@947 -- # uname 00:12:10.568 00:13:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:12:10.568 00:13:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1438593 00:12:10.568 00:13:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:12:10.568 00:13:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:12:10.568 00:13:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1438593' 00:12:10.568 killing process with pid 1438593 00:12:10.568 00:13:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@961 -- # kill 1438593 00:12:10.568 00:13:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # wait 1438593 00:12:10.568 00:13:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:12:10.568 00:13:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:12:10.568 00:12:10.568 real 0m32.793s 00:12:10.568 user 0m30.794s 00:12:10.568 sys 0m30.854s 00:12:10.568 00:13:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1118 -- # xtrace_disable 00:12:10.568 00:13:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:10.568 ************************************ 00:12:10.568 END TEST nvmf_vfio_user_fuzz 00:12:10.568 ************************************ 00:12:10.568 00:13:27 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:12:10.568 00:13:27 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:10.568 00:13:27 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:12:10.568 00:13:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:12:10.568 00:13:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:10.568 ************************************ 
00:12:10.568 START TEST nvmf_host_management 00:12:10.568 ************************************ 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:10.568 * Looking for test storage... 00:12:10.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.568 
00:13:27 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:10.568 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:10.569 00:13:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:10.569 00:13:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:10.569 00:13:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:10.569 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:10.569 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.569 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:10.569 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:10.569 00:13:27 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:10.569 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.569 00:13:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.569 00:13:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.569 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:10.569 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:10.569 00:13:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:12:10.569 00:13:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:14.765 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:14.765 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:14.765 Found net devices under 0000:86:00.0: cvl_0_0 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:14.765 Found net devices under 0000:86:00.1: cvl_0_1 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.765 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:14.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:14.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:12:14.766 00:12:14.766 --- 10.0.0.2 ping statistics --- 00:12:14.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.766 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:14.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:14.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:12:14.766 00:12:14.766 --- 10.0.0.1 ping statistics --- 00:12:14.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.766 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1447103 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1447103 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@823 -- # '[' -z 1447103 ']' 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:14.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:14.766 00:13:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:14.766 [2024-07-16 00:13:33.448449] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:12:14.766 [2024-07-16 00:13:33.448491] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.766 [2024-07-16 00:13:33.504708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.766 [2024-07-16 00:13:33.586137] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.766 [2024-07-16 00:13:33.586170] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.766 [2024-07-16 00:13:33.586177] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.766 [2024-07-16 00:13:33.586183] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.766 [2024-07-16 00:13:33.586188] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.766 [2024-07-16 00:13:33.586284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.766 [2024-07-16 00:13:33.586302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.766 [2024-07-16 00:13:33.586411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.766 [2024-07-16 00:13:33.586412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # return 0 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:15.704 [2024-07-16 00:13:34.306272] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:15.704 Malloc0 00:12:15.704 [2024-07-16 00:13:34.365987] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1447368 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1447368 /var/tmp/bdevperf.sock 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@823 -- # '[' -z 1447368 ']' 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:15.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
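Note: nvmftestinit above split the two detected E810 ports into an initiator/target pair: the target interface moves into a private network namespace, both ends get 10.0.0.x/24 addresses, TCP port 4420 is opened, and nvmf_tgt then runs inside the namespace. The same sequence condensed, with interface names cvl_0_0/cvl_0_1 as discovered in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator port stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open TCP/4420 on the initiator side
  ping -c 1 10.0.0.2                                               # sanity check before starting the target
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &   # -m 0x1E: 4 reactor cores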
00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:15.704 { 00:12:15.704 "params": { 00:12:15.704 "name": "Nvme$subsystem", 00:12:15.704 "trtype": "$TEST_TRANSPORT", 00:12:15.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:15.704 "adrfam": "ipv4", 00:12:15.704 "trsvcid": "$NVMF_PORT", 00:12:15.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:15.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:15.704 "hdgst": ${hdgst:-false}, 00:12:15.704 "ddgst": ${ddgst:-false} 00:12:15.704 }, 00:12:15.704 "method": "bdev_nvme_attach_controller" 00:12:15.704 } 00:12:15.704 EOF 00:12:15.704 )") 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:15.704 00:13:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:15.704 "params": { 00:12:15.704 "name": "Nvme0", 00:12:15.704 "trtype": "tcp", 00:12:15.704 "traddr": "10.0.0.2", 00:12:15.704 "adrfam": "ipv4", 00:12:15.704 "trsvcid": "4420", 00:12:15.704 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:15.704 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:15.704 "hdgst": false, 00:12:15.704 "ddgst": false 00:12:15.704 }, 00:12:15.704 "method": "bdev_nvme_attach_controller" 00:12:15.704 }' 00:12:15.704 [2024-07-16 00:13:34.457345] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:12:15.704 [2024-07-16 00:13:34.457391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1447368 ] 00:12:15.704 [2024-07-16 00:13:34.512415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.964 [2024-07-16 00:13:34.586343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.964 Running I/O for 10 seconds... 
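Note: bdevperf receives its initiator configuration as an SPDK JSON config on /dev/fd/63 via process substitution; the fragment printed above is what gen_nvmf_target_json produced for Nvme0. A standalone sketch of the same run fed from a heredoc instead (only the per-controller fragment appears in the log; the subsystems wrapper is the standard SPDK JSON config layout):

  # -q 64: queue depth, -o 65536: 64 KiB I/Os, -w verify: verify workload, -t 10: run for 10 s
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 --json /dev/stdin <<'EOF'
  {
    "subsystems": [{
      "subsystem": "bdev",
      "config": [{
        "method": "bdev_nvme_attach_controller",
        "params": {
          "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
          "adrfam": "ipv4", "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode0", "hostnqn": "nqn.2016-06.io.spdk:host0",
          "hdgst": false, "ddgst": false
        }
      }]
    }]
  }
  EOF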
00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # return 0 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:16.538 00:13:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.538 [2024-07-16 00:13:35.341313] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2519460 is same with the state(5) to be set 00:12:16.538 [2024-07-16 00:13:35.341356] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2519460 is same with the state(5) to be set 00:12:16.538 [2024-07-16 00:13:35.341363] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2519460 is same with the state(5) to be 
set 00:12:16.538 [2024-07-16 00:13:35.341372] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2519460 is same with the state(5) to be set
(previous tcp.c:1621 message for tqpair=0x2519460 repeated 58 more times, through [2024-07-16 00:13:35.341727])
00:12:16.539 [2024-07-16 00:13:35.342018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.539 [2024-07-16 00:13:35.342050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.539 [2024-07-16 00:13:35.342068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.539 [2024-07-16 00:13:35.342076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.539 [2024-07-16 00:13:35.342090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.539 [2024-07-16 00:13:35.342097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.539 [2024-07-16 00:13:35.342107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.539 [2024-07-16 00:13:35.342114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.539 [2024-07-16 00:13:35.342123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.539 [2024-07-16 00:13:35.342132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.539 [2024-07-16 00:13:35.342140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.539 [2024-07-16 00:13:35.342148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.539 [2024-07-16 00:13:35.342156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.539 [2024-07-16 00:13:35.342164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.539 [2024-07-16 00:13:35.342173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.539 [2024-07-16 00:13:35.342180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.539 [2024-07-16 00:13:35.342188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.539 [2024-07-16 00:13:35.342196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.539 [2024-07-16 00:13:35.342204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.539 [2024-07-16 00:13:35.342211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.539 [2024-07-16 00:13:35.342220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.539 [2024-07-16 00:13:35.342233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.539 [2024-07-16 00:13:35.342242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.539 [2024-07-16 00:13:35.342248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.539 [2024-07-16 00:13:35.342256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.539 [2024-07-16 00:13:35.342263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.539 [2024-07-16 00:13:35.342271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.539 [2024-07-16 00:13:35.342278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.539 [2024-07-16 00:13:35.342287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.539 [2024-07-16 00:13:35.342296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.539 [2024-07-16 00:13:35.342305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.539 [2024-07-16 00:13:35.342313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.539 [2024-07-16 00:13:35.342322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.539 [2024-07-16 00:13:35.342332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.539 [2024-07-16 00:13:35.342341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.539 [2024-07-16 00:13:35.342348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.539 [2024-07-16 00:13:35.342358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.539 [2024-07-16 00:13:35.342365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118784 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.342989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.540 [2024-07-16 00:13:35.342999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.540 [2024-07-16 00:13:35.343006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.541 [2024-07-16 00:13:35.343016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.541 [2024-07-16 00:13:35.343024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.541 [2024-07-16 00:13:35.343033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.541 [2024-07-16 00:13:35.343040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.541 [2024-07-16 00:13:35.343048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.541 [2024-07-16 00:13:35.343056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.541 [2024-07-16 00:13:35.343065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.541 [2024-07-16 00:13:35.343073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.541 [2024-07-16 00:13:35.343082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.541 [2024-07-16 00:13:35.343090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.541 [2024-07-16 00:13:35.343098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:12:16.541 [2024-07-16 00:13:35.343105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.541 [2024-07-16 00:13:35.343114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.541 [2024-07-16 00:13:35.343123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.541 [2024-07-16 00:13:35.343132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ddb20 is same with the state(5) to be set 00:12:16.541 [2024-07-16 00:13:35.343187] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23ddb20 was disconnected and freed. reset controller. 00:12:16.541 [2024-07-16 00:13:35.344128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:16.541 task offset: 114688 on job bdev=Nvme0n1 fails 00:12:16.541 00:12:16.541 Latency(us) 00:12:16.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.541 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:16.541 Job: Nvme0n1 ended in about 0.60 seconds with error 00:12:16.541 Verification LBA range: start 0x0 length 0x400 00:12:16.541 Nvme0n1 : 0.60 1486.04 92.88 106.15 0.00 39432.98 6924.02 34876.55 00:12:16.541 =================================================================================================================== 00:12:16.541 Total : 1486.04 92.88 106.15 0.00 39432.98 6924.02 34876.55 00:12:16.541 [2024-07-16 00:13:35.345756] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:16.541 [2024-07-16 00:13:35.345773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fcc980 (9): Bad file descriptor 00:12:16.541 00:13:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:16.541 00:13:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:16.541 00:13:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:16.541 00:13:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.541 [2024-07-16 00:13:35.348866] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:12:16.541 [2024-07-16 00:13:35.349003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:12:16.541 [2024-07-16 00:13:35.349029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.541 [2024-07-16 00:13:35.349044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:12:16.541 [2024-07-16 00:13:35.349054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:12:16.541 [2024-07-16 00:13:35.349063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:12:16.541 [2024-07-16 00:13:35.349070] 
nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1fcc980 00:12:16.541 [2024-07-16 00:13:35.349089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fcc980 (9): Bad file descriptor 00:12:16.541 [2024-07-16 00:13:35.349102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:12:16.541 [2024-07-16 00:13:35.349111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:12:16.541 [2024-07-16 00:13:35.349120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:12:16.541 [2024-07-16 00:13:35.349133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:12:16.541 00:13:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:16.541 00:13:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:17.919 00:13:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1447368 00:12:17.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1447368) - No such process 00:12:17.919 00:13:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:17.919 00:13:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:17.919 00:13:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:17.919 00:13:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:17.919 00:13:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:17.919 00:13:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:17.919 00:13:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:17.919 00:13:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:17.919 { 00:12:17.919 "params": { 00:12:17.919 "name": "Nvme$subsystem", 00:12:17.919 "trtype": "$TEST_TRANSPORT", 00:12:17.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:17.919 "adrfam": "ipv4", 00:12:17.919 "trsvcid": "$NVMF_PORT", 00:12:17.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:17.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:17.919 "hdgst": ${hdgst:-false}, 00:12:17.919 "ddgst": ${ddgst:-false} 00:12:17.919 }, 00:12:17.919 "method": "bdev_nvme_attach_controller" 00:12:17.919 } 00:12:17.919 EOF 00:12:17.919 )") 00:12:17.919 00:13:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:17.919 00:13:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
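The reconnect failures above trace back to the access-control check a few records earlier: the subsystem rejected host 'nqn.2016-06.io.spdk:host0' ("does not allow host") until the test whitelisted it via rpc_cmd nvmf_subsystem_add_host. A minimal sketch of that grant, reusing the NQNs from this log (rpc.py path shortened for readability):

    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0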
00:12:17.919 00:13:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:17.919 00:13:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:17.919 "params": { 00:12:17.919 "name": "Nvme0", 00:12:17.919 "trtype": "tcp", 00:12:17.919 "traddr": "10.0.0.2", 00:12:17.919 "adrfam": "ipv4", 00:12:17.919 "trsvcid": "4420", 00:12:17.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:17.919 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:17.919 "hdgst": false, 00:12:17.919 "ddgst": false 00:12:17.919 }, 00:12:17.919 "method": "bdev_nvme_attach_controller" 00:12:17.919 }' 00:12:17.919 [2024-07-16 00:13:36.408773] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:12:17.919 [2024-07-16 00:13:36.408822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1447624 ] 00:12:17.919 [2024-07-16 00:13:36.463047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.919 [2024-07-16 00:13:36.533585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.178 Running I/O for 1 seconds... 00:12:19.115 00:12:19.115 Latency(us) 00:12:19.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.115 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:19.115 Verification LBA range: start 0x0 length 0x400 00:12:19.115 Nvme0n1 : 1.04 1606.50 100.41 0.00 0.00 39277.15 3960.65 34192.70 00:12:19.115 =================================================================================================================== 00:12:19.115 Total : 1606.50 100.41 0.00 0.00 39277.15 3960.65 34192.70 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:19.373 rmmod nvme_tcp 00:12:19.373 rmmod nvme_fabrics 00:12:19.373 rmmod nvme_keyring 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1447103 ']' 00:12:19.373 
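The successful bdevperf pass above is driven by the JSON config that gen_nvmf_target_json assembles and hands over through /dev/fd/62. A standalone sketch of an equivalent invocation follows; the "params" block is copied from the printf output above, while the surrounding subsystems/bdev envelope and the shortened paths are assumptions made for readability:

    # Assumed envelope around the params fragment printed above.
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same workload flags as the run above (-q 64 -o 65536 -w verify -t 1).
    build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1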
00:13:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1447103 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@942 -- # '[' -z 1447103 ']' 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # kill -0 1447103 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@947 -- # uname 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1447103 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1447103' 00:12:19.373 killing process with pid 1447103 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@961 -- # kill 1447103 00:12:19.373 00:13:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # wait 1447103 00:12:19.632 [2024-07-16 00:13:38.378611] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:19.632 00:13:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:19.632 00:13:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:19.632 00:13:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:19.632 00:13:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:19.632 00:13:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:19.632 00:13:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.632 00:13:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.632 00:13:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.166 00:13:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:22.166 00:13:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:22.166 00:12:22.166 real 0m12.638s 00:12:22.166 user 0m23.328s 00:12:22.166 sys 0m5.186s 00:12:22.166 00:13:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1118 -- # xtrace_disable 00:12:22.166 00:13:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:22.166 ************************************ 00:12:22.166 END TEST nvmf_host_management 00:12:22.166 ************************************ 00:12:22.166 00:13:40 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:12:22.166 00:13:40 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:22.166 00:13:40 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:12:22.166 00:13:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:12:22.166 00:13:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:22.166 ************************************ 00:12:22.166 START TEST nvmf_lvol 00:12:22.166 ************************************ 00:12:22.166 00:13:40 
nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:22.166 * Looking for test storage... 00:12:22.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.166 00:13:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:22.166 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:22.166 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.166 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.166 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.166 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.166 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.166 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.166 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.166 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.166 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.166 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.166 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:22.166 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:22.166 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.166 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:12:22.167 00:13:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:27.442 00:13:45 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:27.442 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:27.442 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:27.442 Found net devices under 0000:86:00.0: cvl_0_0 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:27.442 Found net devices under 0000:86:00.1: cvl_0_1 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:27.442 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:27.443 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:27.443 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:27.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:12:27.443 00:12:27.443 --- 10.0.0.2 ping statistics --- 00:12:27.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.443 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:12:27.443 00:13:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:27.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:27.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:12:27.443 00:12:27.443 --- 10.0.0.1 ping statistics --- 00:12:27.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.443 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1451375 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1451375 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@823 -- # '[' -z 1451375 ']' 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:27.443 00:13:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:27.443 [2024-07-16 00:13:46.080490] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:12:27.443 [2024-07-16 00:13:46.080532] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.443 [2024-07-16 00:13:46.138493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:27.443 [2024-07-16 00:13:46.210616] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.443 [2024-07-16 00:13:46.210657] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
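The two ping checks above confirm the namespace split that nvmf_tcp_init performs: the target-side port (cvl_0_0, 10.0.0.2) is moved into its own network namespace while the initiator-side port (cvl_0_1, 10.0.0.1) stays in the root namespace, so target and initiator talk over a real link on a single host. Condensed from the trace above, with the same names and addresses:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator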
00:12:27.443 [2024-07-16 00:13:46.210664] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.443 [2024-07-16 00:13:46.210670] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.443 [2024-07-16 00:13:46.210675] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.443 [2024-07-16 00:13:46.210718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.443 [2024-07-16 00:13:46.210740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.443 [2024-07-16 00:13:46.210741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.380 00:13:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:28.380 00:13:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # return 0 00:12:28.380 00:13:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:28.380 00:13:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:28.380 00:13:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:28.380 00:13:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.380 00:13:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:28.380 [2024-07-16 00:13:47.059929] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.380 00:13:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:28.641 00:13:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:28.641 00:13:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:28.641 00:13:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:28.641 00:13:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:28.940 00:13:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:29.200 00:13:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=59acaf5f-2a13-4e07-a5a3-bdd072e4a366 00:12:29.200 00:13:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 59acaf5f-2a13-4e07-a5a3-bdd072e4a366 lvol 20 00:12:29.200 00:13:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=4730f0af-9179-45c5-ad45-30b441046091 00:12:29.200 00:13:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:29.460 00:13:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4730f0af-9179-45c5-ad45-30b441046091 00:12:29.720 00:13:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
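With the add_listener call above, the lvol target stack is fully provisioned over JSON-RPC (the Target Listening notice that follows is its confirmation): two 64 MiB malloc bdevs striped into a raid0, an lvstore on the raid, a 20 MiB lvol, and a subsystem exporting that lvol at 10.0.0.2:4420. A condensed sketch of those calls; rpc.py reaches the target over its Unix-domain socket, which the network namespace does not affect, and the UUIDs are returned fresh on every run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                                   # 64 MiB, 512 B blocks -> Malloc0
$rpc bdev_malloc_create 64 512                                   # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # RAID level 0, 64 KiB strip
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # prints the new lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB lvol, prints its UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420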
00:12:29.720 [2024-07-16 00:13:48.549672] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.980 00:13:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:29.980 00:13:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1451875 00:12:29.980 00:13:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:29.980 00:13:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:30.917 00:13:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 4730f0af-9179-45c5-ad45-30b441046091 MY_SNAPSHOT 00:12:31.177 00:13:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5e0920d1-07c0-411c-bf47-f959342a01b9 00:12:31.177 00:13:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 4730f0af-9179-45c5-ad45-30b441046091 30 00:12:31.436 00:13:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5e0920d1-07c0-411c-bf47-f959342a01b9 MY_CLONE 00:12:31.695 00:13:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=aec0a2a5-4b42-4295-8745-dab79977a67c 00:12:31.695 00:13:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate aec0a2a5-4b42-4295-8745-dab79977a67c 00:12:32.263 00:13:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1451875 00:12:40.381 Initializing NVMe Controllers 00:12:40.381 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:40.381 Controller IO queue size 128, less than required. 00:12:40.381 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:40.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:40.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:40.382 Initialization complete. Launching workers. 
00:12:40.382 ======================================================== 00:12:40.382 Latency(us) 00:12:40.382 Device Information : IOPS MiB/s Average min max 00:12:40.382 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12557.90 49.05 10198.47 1554.27 45104.91 00:12:40.382 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12393.50 48.41 10329.66 3505.23 47437.54 00:12:40.382 ======================================================== 00:12:40.382 Total : 24951.40 97.47 10263.63 1554.27 47437.54 00:12:40.382 00:12:40.382 00:13:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:40.640 00:13:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4730f0af-9179-45c5-ad45-30b441046091 00:12:40.640 00:13:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 59acaf5f-2a13-4e07-a5a3-bdd072e4a366 00:12:40.898 00:13:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:40.898 00:13:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:40.898 00:13:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:40.898 00:13:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:40.898 00:13:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:12:40.898 00:13:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:40.898 00:13:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:12:40.898 00:13:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:40.898 00:13:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:40.898 rmmod nvme_tcp 00:12:40.898 rmmod nvme_fabrics 00:12:40.898 rmmod nvme_keyring 00:12:40.898 00:13:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:40.898 00:13:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:12:40.898 00:13:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:12:40.898 00:13:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1451375 ']' 00:12:40.898 00:13:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1451375 00:12:40.898 00:13:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@942 -- # '[' -z 1451375 ']' 00:12:40.898 00:13:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # kill -0 1451375 00:12:40.898 00:13:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@947 -- # uname 00:12:40.898 00:13:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:12:40.898 00:13:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1451375 00:12:41.157 00:13:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:12:41.157 00:13:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:12:41.157 00:13:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1451375' 00:12:41.157 killing process with pid 1451375 00:12:41.157 00:13:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@961 -- # kill 1451375 00:12:41.157 00:13:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # wait 1451375 00:12:41.157 00:13:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:41.157 
00:13:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:41.157 00:13:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:41.157 00:13:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:41.157 00:13:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:41.157 00:13:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.157 00:13:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.157 00:13:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:43.703 00:12:43.703 real 0m21.515s 00:12:43.703 user 1m3.888s 00:12:43.703 sys 0m6.668s 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1118 -- # xtrace_disable 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:43.703 ************************************ 00:12:43.703 END TEST nvmf_lvol 00:12:43.703 ************************************ 00:12:43.703 00:14:02 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:12:43.703 00:14:02 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:43.703 00:14:02 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:12:43.703 00:14:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:12:43.703 00:14:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:43.703 ************************************ 00:12:43.703 START TEST nvmf_lvs_grow 00:12:43.703 ************************************ 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:43.703 * Looking for test storage... 
00:12:43.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:12:43.703 00:14:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:48.976 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:48.976 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:48.976 Found net devices under 0000:86:00.0: cvl_0_0 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:48.976 Found net devices under 0000:86:00.1: cvl_0_1 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.976 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:48.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:12:48.976 00:12:48.976 --- 10.0.0.2 ping statistics --- 00:12:48.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.976 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:48.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:12:48.977 00:12:48.977 --- 10.0.0.1 ping statistics --- 00:12:48.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.977 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1457202 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1457202 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # '[' -z 1457202 ']' 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:48.977 00:14:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:48.977 [2024-07-16 00:14:07.438936] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:12:48.977 [2024-07-16 00:14:07.438980] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.977 [2024-07-16 00:14:07.495857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.977 [2024-07-16 00:14:07.567208] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.977 [2024-07-16 00:14:07.567254] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
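nvmfappstart, traced above for the lvs_grow run, boils down to launching nvmf_tgt inside the target namespace, recording its pid, and blocking until the app answers on /var/tmp/spdk.sock; only then is the TCP transport created. A rough sketch of that pattern, in which the polling loop is an illustrative stand-in for waitforlisten, not its actual implementation:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk           # tree path from this run
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# Illustrative stand-in for waitforlisten: poll the RPC socket until it answers.
until "$spdk/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
"$spdk/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192   # transport flags exactly as traced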
00:12:48.977 [2024-07-16 00:14:07.567260] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.977 [2024-07-16 00:14:07.567266] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.977 [2024-07-16 00:14:07.567271] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.977 [2024-07-16 00:14:07.567292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.545 00:14:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:49.545 00:14:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # return 0 00:12:49.545 00:14:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:49.545 00:14:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:49.545 00:14:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:49.545 00:14:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.545 00:14:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:49.804 [2024-07-16 00:14:08.422384] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.804 00:14:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:49.804 00:14:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:12:49.805 00:14:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # xtrace_disable 00:12:49.805 00:14:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:49.805 ************************************ 00:12:49.805 START TEST lvs_grow_clean 00:12:49.805 ************************************ 00:12:49.805 00:14:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1117 -- # lvs_grow 00:12:49.805 00:14:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:49.805 00:14:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:49.805 00:14:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:49.805 00:14:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:49.805 00:14:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:49.805 00:14:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:49.805 00:14:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:49.805 00:14:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:49.805 00:14:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:50.064 00:14:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:12:50.064 00:14:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:50.064 00:14:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7de0cd2a-8943-4bb6-89a3-4ee54e7c276d 00:12:50.064 00:14:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7de0cd2a-8943-4bb6-89a3-4ee54e7c276d 00:12:50.064 00:14:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:50.322 00:14:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:50.322 00:14:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:50.322 00:14:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7de0cd2a-8943-4bb6-89a3-4ee54e7c276d lvol 150 00:12:50.580 00:14:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=645ae8ed-0746-4b31-ace3-8d324f4fd66d 00:12:50.580 00:14:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:50.580 00:14:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:50.580 [2024-07-16 00:14:09.384496] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:50.580 [2024-07-16 00:14:09.384544] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:50.580 true 00:12:50.580 00:14:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7de0cd2a-8943-4bb6-89a3-4ee54e7c276d 00:12:50.580 00:14:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:50.838 00:14:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:50.838 00:14:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:51.096 00:14:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 645ae8ed-0746-4b31-ace3-8d324f4fd66d 00:12:51.096 00:14:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:51.354 [2024-07-16 00:14:10.066562] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.354 00:14:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:51.612 00:14:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1457725 00:12:51.613 00:14:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:51.613 00:14:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1457725 /var/tmp/bdevperf.sock 00:12:51.613 00:14:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@823 -- # '[' -z 1457725 ']' 00:12:51.613 00:14:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:51.613 00:14:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:51.613 00:14:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:51.613 00:14:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:51.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:51.613 00:14:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:51.613 00:14:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:51.613 [2024-07-16 00:14:10.301246] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
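The fixture this clean-grow test exercises is a 200 MiB file exposed through an AIO bdev and formatted as an lvstore with 4 MiB clusters: 200 MiB is 50 clusters, roughly one cluster's worth of which goes to lvstore metadata, which is why the trace above asserts total_data_clusters == 49 before creating a 150 MiB lvol (rounded up to 38 clusters, matching the num_blocks of 38912 at 4 KiB seen in the bdev dump below). Setup condensed from the trace, with the backing-file path taken from this run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
rm -f "$aio" && truncate -s 200M "$aio"                # sparse 200 MiB backing file
$rpc bdev_aio_create "$aio" aio_bdev 4096              # AIO bdev with 4 KiB blocks
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # 4 MiB clusters; prints lvstore UUID
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # -> 49
$rpc bdev_lvol_create -u "$lvs" lvol 150               # 150 MiB -> 38 clusters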
00:12:51.613 [2024-07-16 00:14:10.301294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1457725 ] 00:12:51.613 [2024-07-16 00:14:10.358304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.613 [2024-07-16 00:14:10.437828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.597 00:14:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:52.597 00:14:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # return 0 00:12:52.597 00:14:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:52.597 Nvme0n1 00:12:52.597 00:14:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:52.856 [ 00:12:52.856 { 00:12:52.856 "name": "Nvme0n1", 00:12:52.856 "aliases": [ 00:12:52.856 "645ae8ed-0746-4b31-ace3-8d324f4fd66d" 00:12:52.856 ], 00:12:52.856 "product_name": "NVMe disk", 00:12:52.856 "block_size": 4096, 00:12:52.856 "num_blocks": 38912, 00:12:52.856 "uuid": "645ae8ed-0746-4b31-ace3-8d324f4fd66d", 00:12:52.856 "assigned_rate_limits": { 00:12:52.856 "rw_ios_per_sec": 0, 00:12:52.856 "rw_mbytes_per_sec": 0, 00:12:52.856 "r_mbytes_per_sec": 0, 00:12:52.856 "w_mbytes_per_sec": 0 00:12:52.856 }, 00:12:52.856 "claimed": false, 00:12:52.856 "zoned": false, 00:12:52.856 "supported_io_types": { 00:12:52.856 "read": true, 00:12:52.856 "write": true, 00:12:52.856 "unmap": true, 00:12:52.856 "flush": true, 00:12:52.856 "reset": true, 00:12:52.856 "nvme_admin": true, 00:12:52.856 "nvme_io": true, 00:12:52.856 "nvme_io_md": false, 00:12:52.856 "write_zeroes": true, 00:12:52.856 "zcopy": false, 00:12:52.856 "get_zone_info": false, 00:12:52.856 "zone_management": false, 00:12:52.856 "zone_append": false, 00:12:52.856 "compare": true, 00:12:52.856 "compare_and_write": true, 00:12:52.856 "abort": true, 00:12:52.856 "seek_hole": false, 00:12:52.856 "seek_data": false, 00:12:52.856 "copy": true, 00:12:52.856 "nvme_iov_md": false 00:12:52.856 }, 00:12:52.856 "memory_domains": [ 00:12:52.856 { 00:12:52.856 "dma_device_id": "system", 00:12:52.856 "dma_device_type": 1 00:12:52.856 } 00:12:52.856 ], 00:12:52.856 "driver_specific": { 00:12:52.856 "nvme": [ 00:12:52.856 { 00:12:52.856 "trid": { 00:12:52.856 "trtype": "TCP", 00:12:52.856 "adrfam": "IPv4", 00:12:52.856 "traddr": "10.0.0.2", 00:12:52.856 "trsvcid": "4420", 00:12:52.856 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:52.856 }, 00:12:52.856 "ctrlr_data": { 00:12:52.856 "cntlid": 1, 00:12:52.856 "vendor_id": "0x8086", 00:12:52.856 "model_number": "SPDK bdev Controller", 00:12:52.856 "serial_number": "SPDK0", 00:12:52.856 "firmware_revision": "24.09", 00:12:52.856 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:52.856 "oacs": { 00:12:52.856 "security": 0, 00:12:52.856 "format": 0, 00:12:52.856 "firmware": 0, 00:12:52.856 "ns_manage": 0 00:12:52.856 }, 00:12:52.856 "multi_ctrlr": true, 00:12:52.856 "ana_reporting": false 00:12:52.856 }, 00:12:52.856 "vs": { 00:12:52.856 "nvme_version": "1.3" 00:12:52.856 
}, 00:12:52.856 "ns_data": { 00:12:52.856 "id": 1, 00:12:52.856 "can_share": true 00:12:52.856 } 00:12:52.856 } 00:12:52.856 ], 00:12:52.856 "mp_policy": "active_passive" 00:12:52.856 } 00:12:52.856 } 00:12:52.856 ] 00:12:52.856 00:14:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1457955 00:12:52.856 00:14:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:52.856 00:14:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:52.856 Running I/O for 10 seconds... 00:12:53.792 Latency(us) 00:12:53.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:53.792 Nvme0n1 : 1.00 22981.00 89.77 0.00 0.00 0.00 0.00 0.00 00:12:53.792 =================================================================================================================== 00:12:53.792 Total : 22981.00 89.77 0.00 0.00 0.00 0.00 0.00 00:12:53.792 00:12:54.728 00:14:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7de0cd2a-8943-4bb6-89a3-4ee54e7c276d 00:12:54.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:54.987 Nvme0n1 : 2.00 23138.50 90.38 0.00 0.00 0.00 0.00 0.00 00:12:54.987 =================================================================================================================== 00:12:54.987 Total : 23138.50 90.38 0.00 0.00 0.00 0.00 0.00 00:12:54.987 00:12:54.987 true 00:12:54.987 00:14:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7de0cd2a-8943-4bb6-89a3-4ee54e7c276d 00:12:54.987 00:14:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:55.246 00:14:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:55.246 00:14:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:55.246 00:14:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1457955 00:12:55.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:55.827 Nvme0n1 : 3.00 23191.00 90.59 0.00 0.00 0.00 0.00 0.00 00:12:55.827 =================================================================================================================== 00:12:55.827 Total : 23191.00 90.59 0.00 0.00 0.00 0.00 0.00 00:12:55.827 00:12:57.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:57.199 Nvme0n1 : 4.00 23267.50 90.89 0.00 0.00 0.00 0.00 0.00 00:12:57.199 =================================================================================================================== 00:12:57.199 Total : 23267.50 90.89 0.00 0.00 0.00 0.00 0.00 00:12:57.199 00:12:58.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:58.131 Nvme0n1 : 5.00 23311.80 91.06 0.00 0.00 0.00 0.00 0.00 00:12:58.131 =================================================================================================================== 00:12:58.131 Total : 23311.80 91.06 0.00 0.00 0.00 0.00 0.00 00:12:58.131 
00:12:59.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:59.066 Nvme0n1 : 6.00 23330.50 91.13 0.00 0.00 0.00 0.00 0.00 00:12:59.066 =================================================================================================================== 00:12:59.066 Total : 23330.50 91.13 0.00 0.00 0.00 0.00 0.00 00:12:59.066 00:13:00.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:00.003 Nvme0n1 : 7.00 23362.00 91.26 0.00 0.00 0.00 0.00 0.00 00:13:00.003 =================================================================================================================== 00:13:00.003 Total : 23362.00 91.26 0.00 0.00 0.00 0.00 0.00 00:13:00.003 00:13:00.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:00.940 Nvme0n1 : 8.00 23385.75 91.35 0.00 0.00 0.00 0.00 0.00 00:13:00.940 =================================================================================================================== 00:13:00.940 Total : 23385.75 91.35 0.00 0.00 0.00 0.00 0.00 00:13:00.940 00:13:01.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:01.878 Nvme0n1 : 9.00 23375.67 91.31 0.00 0.00 0.00 0.00 0.00 00:13:01.878 =================================================================================================================== 00:13:01.878 Total : 23375.67 91.31 0.00 0.00 0.00 0.00 0.00 00:13:01.878 00:13:02.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:02.815 Nvme0n1 : 10.00 23387.00 91.36 0.00 0.00 0.00 0.00 0.00 00:13:02.815 =================================================================================================================== 00:13:02.815 Total : 23387.00 91.36 0.00 0.00 0.00 0.00 0.00 00:13:02.815 00:13:02.815 00:13:02.815 Latency(us) 00:13:02.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:02.815 Nvme0n1 : 10.01 23386.69 91.35 0.00 0.00 5469.60 3348.03 15272.74 00:13:02.815 =================================================================================================================== 00:13:02.815 Total : 23386.69 91.35 0.00 0.00 5469.60 3348.03 15272.74 00:13:02.815 0 00:13:02.815 00:14:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1457725 00:13:02.815 00:14:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@942 -- # '[' -z 1457725 ']' 00:13:02.815 00:14:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # kill -0 1457725 00:13:02.815 00:14:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # uname 00:13:02.815 00:14:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:02.815 00:14:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1457725 00:13:03.074 00:14:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:13:03.074 00:14:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:13:03.074 00:14:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1457725' 00:13:03.074 killing process with pid 1457725 00:13:03.074 00:14:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@961 -- # kill 1457725 
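The Received shutdown signal message just below is bdevperf winding down after that kill. The grow itself already happened: the backing file was truncated to 400 MiB and rescanned during setup (old block count 51200, new block count 102400, i.e. 200 MiB to 400 MiB at 4 KiB blocks), and bdev_lvol_grow_lvstore then let the lvstore claim the new space while bdevperf I/O was in flight, taking total_data_clusters from 49 to 99 and leaving free_clusters at 61 (99 minus the 38 clusters backing the lvol). Condensed, reusing $rpc, $aio and $lvs from the setup sketch above:

truncate -s 400M "$aio"                 # double the backing file
$rpc bdev_aio_rescan aio_bdev           # AIO bdev re-reads its size: 51200 -> 102400 blocks
$rpc bdev_lvol_grow_lvstore -u "$lvs"   # lvstore absorbs the new clusters while I/O runs
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # -> 99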
00:13:03.074 Received shutdown signal, test time was about 10.000000 seconds
00:13:03.074
00:13:03.074 Latency(us)
00:13:03.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:03.074 ===================================================================================================================
00:13:03.074 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:13:03.074 00:14:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # wait 1457725
00:13:03.074 00:14:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:13:03.333 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:13:03.593 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7de0cd2a-8943-4bb6-89a3-4ee54e7c276d
00:13:03.593 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:13:03.593 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:13:03.593 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:13:03.593 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:13:03.852 [2024-07-16 00:14:22.566816] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:13:03.852 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7de0cd2a-8943-4bb6-89a3-4ee54e7c276d
00:13:03.852 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # local es=0
00:13:03.852 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7de0cd2a-8943-4bb6-89a3-4ee54e7c276d
00:13:03.852 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:03.852 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:13:03.852 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:03.852 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:13:03.852 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:03.852 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:13:03.852 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:03.852 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:13:03.852 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7de0cd2a-8943-4bb6-89a3-4ee54e7c276d
00:13:04.111 request:
00:13:04.112 {
00:13:04.112 "uuid": "7de0cd2a-8943-4bb6-89a3-4ee54e7c276d",
00:13:04.112 "method": "bdev_lvol_get_lvstores",
00:13:04.112 "req_id": 1
00:13:04.112 }
00:13:04.112 Got JSON-RPC error response
00:13:04.112 response:
00:13:04.112 {
00:13:04.112 "code": -19,
00:13:04.112 "message": "No such device"
00:13:04.112 }
00:13:04.112 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # es=1
00:13:04.112 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # (( es > 128 ))
00:13:04.112 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@664 -- # [[ -n '' ]]
00:13:04.112 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@669 -- # (( !es == 0 ))
00:13:04.112 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:13:04.112 aio_bdev
00:13:04.370 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 645ae8ed-0746-4b31-ace3-8d324f4fd66d
00:13:04.370 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@891 -- # local bdev_name=645ae8ed-0746-4b31-ace3-8d324f4fd66d
00:13:04.370 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@892 -- # local bdev_timeout=
00:13:04.370 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@893 -- # local i
00:13:04.370 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@894 -- # [[ -z '' ]]
00:13:04.370 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@894 -- # bdev_timeout=2000
00:13:04.370 00:14:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:13:04.370 00:14:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 645ae8ed-0746-4b31-ace3-8d324f4fd66d -t 2000
00:13:04.630 [
00:13:04.630 {
00:13:04.630 "name": "645ae8ed-0746-4b31-ace3-8d324f4fd66d",
00:13:04.630 "aliases": [
00:13:04.630 "lvs/lvol"
00:13:04.630 ],
00:13:04.630 "product_name": "Logical Volume",
00:13:04.630 "block_size": 4096,
00:13:04.630 "num_blocks": 38912,
00:13:04.630 "uuid": "645ae8ed-0746-4b31-ace3-8d324f4fd66d",
00:13:04.630 "assigned_rate_limits": {
00:13:04.630 "rw_ios_per_sec": 0,
00:13:04.630 "rw_mbytes_per_sec": 0,
00:13:04.630 "r_mbytes_per_sec": 0,
00:13:04.630 "w_mbytes_per_sec": 0
00:13:04.630 },
00:13:04.630 "claimed": false,
00:13:04.630 "zoned": false,
00:13:04.630 "supported_io_types": {
00:13:04.630 "read": true,
00:13:04.630 "write": true,
00:13:04.630 "unmap": true,
00:13:04.630 "flush": false,
00:13:04.630 "reset": true,
00:13:04.630 "nvme_admin": false,
00:13:04.630 "nvme_io": false,
00:13:04.630 "nvme_io_md": false,
00:13:04.630 "write_zeroes": true,
00:13:04.630 "zcopy": false,
00:13:04.630 "get_zone_info": false,
00:13:04.630 "zone_management": false,
00:13:04.631 "zone_append": false,
00:13:04.631 "compare": false,
00:13:04.631 "compare_and_write": false,
00:13:04.631 "abort": false,
00:13:04.631 "seek_hole": true,
00:13:04.631 "seek_data": true,
00:13:04.631 "copy": false,
00:13:04.631 "nvme_iov_md": false
00:13:04.631 },
00:13:04.631 "driver_specific": {
00:13:04.631 "lvol": {
00:13:04.631 "lvol_store_uuid": "7de0cd2a-8943-4bb6-89a3-4ee54e7c276d",
00:13:04.631 "base_bdev": "aio_bdev",
00:13:04.631 "thin_provision": false,
00:13:04.631 "num_allocated_clusters": 38,
00:13:04.631 "snapshot": false,
00:13:04.631 "clone": false,
00:13:04.631 "esnap_clone": false
00:13:04.631 }
00:13:04.631 }
00:13:04.631 }
00:13:04.631 ]
00:13:04.631 00:14:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # return 0
00:13:04.631 00:14:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7de0cd2a-8943-4bb6-89a3-4ee54e7c276d
00:13:04.631 00:14:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:13:04.891 00:14:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:13:04.891 00:14:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7de0cd2a-8943-4bb6-89a3-4ee54e7c276d
00:13:04.891 00:14:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:13:04.891 00:14:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:13:04.891 00:14:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 645ae8ed-0746-4b31-ace3-8d324f4fd66d
00:13:05.149 00:14:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7de0cd2a-8943-4bb6-89a3-4ee54e7c276d
00:13:05.409 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:13:05.409 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:13:05.409
00:13:05.409 real 0m15.750s
00:13:05.409 user 0m15.417s
00:13:05.409 sys 0m1.406s
00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1118 -- # xtrace_disable
00:13:05.409 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:13:05.409 ************************************
00:13:05.409 END TEST lvs_grow_clean
00:13:05.409 ************************************
00:13:05.409 00:14:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1136 -- # return 0
00:13:05.409 00:14:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty
00:13:05.409 00:14:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']'
00:13:05.409 00:14:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # xtrace_disable
00:13:05.409 00:14:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:13:05.668 ************************************
00:13:05.668 START TEST lvs_grow_dirty 00:13:05.668 ************************************ 00:13:05.668 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1117 -- # lvs_grow dirty 00:13:05.668 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:05.668 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:05.668 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:05.668 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:05.668 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:05.668 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:05.668 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:05.668 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:05.668 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:05.668 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:05.668 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:05.927 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=84cc3d83-cdd8-426c-9930-46e3208509c6 00:13:05.927 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84cc3d83-cdd8-426c-9930-46e3208509c6 00:13:05.927 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:06.186 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:06.186 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:06.186 00:14:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 84cc3d83-cdd8-426c-9930-46e3208509c6 lvol 150 00:13:06.186 00:14:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8a2b5ee6-37d4-440b-b4d2-d5450d77ad22 00:13:06.186 00:14:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:06.186 00:14:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:06.444 [2024-07-16 00:14:25.175403] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:06.444 [2024-07-16 00:14:25.175454] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:06.444 true 00:13:06.444 00:14:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84cc3d83-cdd8-426c-9930-46e3208509c6 00:13:06.444 00:14:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:06.703 00:14:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:06.703 00:14:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:06.963 00:14:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8a2b5ee6-37d4-440b-b4d2-d5450d77ad22 00:13:06.963 00:14:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:07.223 [2024-07-16 00:14:25.881504] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.223 00:14:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:07.223 00:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1460323 00:13:07.223 00:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:07.223 00:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:07.223 00:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1460323 /var/tmp/bdevperf.sock 00:13:07.223 00:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@823 -- # '[' -z 1460323 ']' 00:13:07.223 00:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:07.223 00:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:07.223 00:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:07.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:07.223 00:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:07.223 00:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:07.483 [2024-07-16 00:14:26.110699] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
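Before the bdevperf run below gets going, the dirty-test setup just traced condenses to a short sequence. This is a sketch only, built from the commands in this trace; $WS is shorthand introduced here for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk, and <lvs-uuid> stands for the UUID the lvstore create prints (84cc3d83-cdd8-426c-9930-46e3208509c6 in this run):

  WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # shorthand, not from the script
  truncate -s 200M $WS/test/nvmf/target/aio_bdev          # 200M backing file
  $WS/scripts/rpc.py bdev_aio_create $WS/test/nvmf/target/aio_bdev aio_bdev 4096
  $WS/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs       # -> 49 data clusters of 4 MiB
  $WS/scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150   # 150M volume on top
  truncate -s 400M $WS/test/nvmf/target/aio_bdev          # grow the file on disk...
  $WS/scripts/rpc.py bdev_aio_rescan aio_bdev             # ...and have SPDK re-read its size

Note that the rescan only resizes the AIO bdev (51200 -> 102400 blocks above); total_data_clusters stays at 49 until bdev_lvol_grow_lvstore is issued later, while bdevperf I/O is already in flight.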
00:13:07.483 [2024-07-16 00:14:26.110747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1460323 ] 00:13:07.483 [2024-07-16 00:14:26.163821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.483 [2024-07-16 00:14:26.242654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.105 00:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:08.105 00:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # return 0 00:13:08.105 00:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:08.364 Nvme0n1 00:13:08.364 00:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:08.623 [ 00:13:08.623 { 00:13:08.623 "name": "Nvme0n1", 00:13:08.623 "aliases": [ 00:13:08.623 "8a2b5ee6-37d4-440b-b4d2-d5450d77ad22" 00:13:08.623 ], 00:13:08.623 "product_name": "NVMe disk", 00:13:08.623 "block_size": 4096, 00:13:08.623 "num_blocks": 38912, 00:13:08.624 "uuid": "8a2b5ee6-37d4-440b-b4d2-d5450d77ad22", 00:13:08.624 "assigned_rate_limits": { 00:13:08.624 "rw_ios_per_sec": 0, 00:13:08.624 "rw_mbytes_per_sec": 0, 00:13:08.624 "r_mbytes_per_sec": 0, 00:13:08.624 "w_mbytes_per_sec": 0 00:13:08.624 }, 00:13:08.624 "claimed": false, 00:13:08.624 "zoned": false, 00:13:08.624 "supported_io_types": { 00:13:08.624 "read": true, 00:13:08.624 "write": true, 00:13:08.624 "unmap": true, 00:13:08.624 "flush": true, 00:13:08.624 "reset": true, 00:13:08.624 "nvme_admin": true, 00:13:08.624 "nvme_io": true, 00:13:08.624 "nvme_io_md": false, 00:13:08.624 "write_zeroes": true, 00:13:08.624 "zcopy": false, 00:13:08.624 "get_zone_info": false, 00:13:08.624 "zone_management": false, 00:13:08.624 "zone_append": false, 00:13:08.624 "compare": true, 00:13:08.624 "compare_and_write": true, 00:13:08.624 "abort": true, 00:13:08.624 "seek_hole": false, 00:13:08.624 "seek_data": false, 00:13:08.624 "copy": true, 00:13:08.624 "nvme_iov_md": false 00:13:08.624 }, 00:13:08.624 "memory_domains": [ 00:13:08.624 { 00:13:08.624 "dma_device_id": "system", 00:13:08.624 "dma_device_type": 1 00:13:08.624 } 00:13:08.624 ], 00:13:08.624 "driver_specific": { 00:13:08.624 "nvme": [ 00:13:08.624 { 00:13:08.624 "trid": { 00:13:08.624 "trtype": "TCP", 00:13:08.624 "adrfam": "IPv4", 00:13:08.624 "traddr": "10.0.0.2", 00:13:08.624 "trsvcid": "4420", 00:13:08.624 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:08.624 }, 00:13:08.624 "ctrlr_data": { 00:13:08.624 "cntlid": 1, 00:13:08.624 "vendor_id": "0x8086", 00:13:08.624 "model_number": "SPDK bdev Controller", 00:13:08.624 "serial_number": "SPDK0", 00:13:08.624 "firmware_revision": "24.09", 00:13:08.624 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:08.624 "oacs": { 00:13:08.624 "security": 0, 00:13:08.624 "format": 0, 00:13:08.624 "firmware": 0, 00:13:08.624 "ns_manage": 0 00:13:08.624 }, 00:13:08.624 "multi_ctrlr": true, 00:13:08.624 "ana_reporting": false 00:13:08.624 }, 00:13:08.624 "vs": { 00:13:08.624 "nvme_version": "1.3" 00:13:08.624 
}, 00:13:08.624 "ns_data": { 00:13:08.624 "id": 1, 00:13:08.624 "can_share": true 00:13:08.624 } 00:13:08.624 } 00:13:08.624 ], 00:13:08.624 "mp_policy": "active_passive" 00:13:08.624 } 00:13:08.624 } 00:13:08.624 ] 00:13:08.624 00:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:08.624 00:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1460559 00:13:08.624 00:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:08.624 Running I/O for 10 seconds... 00:13:10.004 Latency(us) 00:13:10.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:10.004 Nvme0n1 : 1.00 22940.00 89.61 0.00 0.00 0.00 0.00 0.00 00:13:10.004 =================================================================================================================== 00:13:10.004 Total : 22940.00 89.61 0.00 0.00 0.00 0.00 0.00 00:13:10.004 00:13:10.573 00:14:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 84cc3d83-cdd8-426c-9930-46e3208509c6 00:13:10.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:10.832 Nvme0n1 : 2.00 23142.00 90.40 0.00 0.00 0.00 0.00 0.00 00:13:10.832 =================================================================================================================== 00:13:10.832 Total : 23142.00 90.40 0.00 0.00 0.00 0.00 0.00 00:13:10.832 00:13:10.832 true 00:13:10.832 00:14:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84cc3d83-cdd8-426c-9930-46e3208509c6 00:13:10.832 00:14:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:11.092 00:14:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:11.092 00:14:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:11.092 00:14:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1460559 00:13:11.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:11.660 Nvme0n1 : 3.00 23171.67 90.51 0.00 0.00 0.00 0.00 0.00 00:13:11.660 =================================================================================================================== 00:13:11.660 Total : 23171.67 90.51 0.00 0.00 0.00 0.00 0.00 00:13:11.660 00:13:13.039 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:13.039 Nvme0n1 : 4.00 23234.75 90.76 0.00 0.00 0.00 0.00 0.00 00:13:13.039 =================================================================================================================== 00:13:13.039 Total : 23234.75 90.76 0.00 0.00 0.00 0.00 0.00 00:13:13.039 00:13:13.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:13.978 Nvme0n1 : 5.00 23285.40 90.96 0.00 0.00 0.00 0.00 0.00 00:13:13.978 =================================================================================================================== 00:13:13.978 Total : 23285.40 90.96 0.00 0.00 0.00 0.00 0.00 00:13:13.978 
00:13:14.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:14.924 Nvme0n1 : 6.00 23328.17 91.13 0.00 0.00 0.00 0.00 0.00
00:13:14.924 ===================================================================================================================
00:13:14.924 Total : 23328.17 91.13 0.00 0.00 0.00 0.00 0.00
00:13:14.924
00:13:15.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:15.859 Nvme0n1 : 7.00 23369.00 91.29 0.00 0.00 0.00 0.00 0.00
00:13:15.859 ===================================================================================================================
00:13:15.859 Total : 23369.00 91.29 0.00 0.00 0.00 0.00 0.00
00:13:15.859
00:13:16.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:16.792 Nvme0n1 : 8.00 23393.12 91.38 0.00 0.00 0.00 0.00 0.00
00:13:16.792 ===================================================================================================================
00:13:16.792 Total : 23393.12 91.38 0.00 0.00 0.00 0.00 0.00
00:13:16.792
00:13:17.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:17.786 Nvme0n1 : 9.00 23415.44 91.47 0.00 0.00 0.00 0.00 0.00
00:13:17.786 ===================================================================================================================
00:13:17.786 Total : 23415.44 91.47 0.00 0.00 0.00 0.00 0.00
00:13:17.786
00:13:18.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:18.720 Nvme0n1 : 10.00 23429.10 91.52 0.00 0.00 0.00 0.00 0.00
00:13:18.720 ===================================================================================================================
00:13:18.720 Total : 23429.10 91.52 0.00 0.00 0.00 0.00 0.00
00:13:18.720
00:13:18.720
00:13:18.720 Latency(us)
00:13:18.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:18.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:18.720 Nvme0n1 : 10.01 23430.15 91.52 0.00 0.00 5459.91 3262.55 16070.57
00:13:18.720 ===================================================================================================================
00:13:18.720 Total : 23430.15 91.52 0.00 0.00 5459.91 3262.55 16070.57
00:13:18.720 0
00:13:18.720 00:14:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1460323
00:13:18.720 00:14:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@942 -- # '[' -z 1460323 ']'
00:13:18.720 00:14:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # kill -0 1460323
00:13:18.720 00:14:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # uname
00:13:18.720 00:14:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:13:18.720 00:14:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1460323
00:13:18.720 00:14:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # process_name=reactor_1
00:13:18.720 00:14:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']'
00:13:18.720 00:14:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1460323'
00:13:18.720 killing process with pid 1460323
00:13:18.720 00:14:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@961 -- # kill 1460323
00:13:18.720 Received shutdown signal, test time was about 10.000000 seconds
00:13:18.720
00:13:18.720 Latency(us)
00:13:18.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:18.720 ===================================================================================================================
00:13:18.720 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:13:18.720 00:14:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # wait 1460323
00:13:18.979 00:14:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:13:19.238 00:14:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:13:19.497 00:14:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84cc3d83-cdd8-426c-9930-46e3208509c6
00:13:19.497 00:14:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:13:19.497 00:14:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:13:19.497 00:14:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:13:19.497 00:14:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1457202
00:13:19.498 00:14:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1457202
00:13:19.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1457202 Killed "${NVMF_APP[@]}" "$@"
00:13:19.498 00:14:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
00:13:19.498 00:14:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:13:19.498 00:14:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:13:19.498 00:14:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@716 -- # xtrace_disable
00:13:19.498 00:14:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:13:19.498 00:14:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1462397
00:13:19.498 00:14:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:13:19.498 00:14:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1462397
00:13:19.498 00:14:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@823 -- # '[' -z 1462397 ']'
00:13:19.498 00:14:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:19.498 00:14:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # local max_retries=100
00:13:19.498 00:14:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:19.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
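The ten-second measurement that just finished reduces to the following sketch (reusing the $WS shorthand from the earlier sketch; binaries, socket, addresses and NQN are the ones in this trace):

  $WS/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
      -w randwrite -t 10 -S 1 -z &                        # perf tool, waiting for config
  $WS/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0   # exposes Nvme0n1
  $WS/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 2                                                 # let I/O settle, then grow live
  $WS/scripts/rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid> # total_data_clusters: 49 -> 99
  wait                                                    # ~23.4k IOPS, 5.46 ms average above

The point of the ordering is that the lvstore grows while randwrite I/O is running against the exported lvol, without disturbing the throughput reported above.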
00:13:19.498 00:14:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:19.498 00:14:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:19.757 [2024-07-16 00:14:38.378268] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:13:19.757 [2024-07-16 00:14:38.378316] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.757 [2024-07-16 00:14:38.437034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.757 [2024-07-16 00:14:38.515998] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.757 [2024-07-16 00:14:38.516033] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.757 [2024-07-16 00:14:38.516040] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.757 [2024-07-16 00:14:38.516047] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.757 [2024-07-16 00:14:38.516054] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:19.757 [2024-07-16 00:14:38.516070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.326 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:20.585 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # return 0 00:13:20.585 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:20.585 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:20.585 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:20.585 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.585 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:20.585 [2024-07-16 00:14:39.373678] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:20.585 [2024-07-16 00:14:39.373758] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:20.585 [2024-07-16 00:14:39.373782] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:20.585 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:20.585 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8a2b5ee6-37d4-440b-b4d2-d5450d77ad22 00:13:20.585 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@891 -- # local bdev_name=8a2b5ee6-37d4-440b-b4d2-d5450d77ad22 00:13:20.585 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:13:20.585 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@893 -- # local i 00:13:20.585 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@894 -- # [[ -z '' ]] 
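What the dirty variant is exercising here, as a sketch rather than the verbatim script (again with the $WS shorthand): the previous target was killed with SIGKILL, so the lvstore on aio_bdev was never cleanly closed; a fresh target has just come up, and simply re-creating the AIO bdev is enough to trigger blobstore recovery, which the notices just below record:

  kill -9 "$nvmfpid"             # hard kill: no clean shutdown, metadata left dirty
  ip netns exec cvl_0_0_ns_spdk $WS/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # once /var/tmp/spdk.sock is listening:
  $WS/scripts/rpc.py bdev_aio_create $WS/test/nvmf/target/aio_bdev aio_bdev 4096
  # -> "Performing recovery on blobstore", blobs 0x0 and 0x1 are replayed,
  #    and lvs/lvol reappears with free_clusters/total_data_clusters intact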
00:13:20.585 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:13:20.585 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:20.844 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8a2b5ee6-37d4-440b-b4d2-d5450d77ad22 -t 2000 00:13:21.102 [ 00:13:21.102 { 00:13:21.102 "name": "8a2b5ee6-37d4-440b-b4d2-d5450d77ad22", 00:13:21.102 "aliases": [ 00:13:21.102 "lvs/lvol" 00:13:21.102 ], 00:13:21.102 "product_name": "Logical Volume", 00:13:21.102 "block_size": 4096, 00:13:21.102 "num_blocks": 38912, 00:13:21.102 "uuid": "8a2b5ee6-37d4-440b-b4d2-d5450d77ad22", 00:13:21.102 "assigned_rate_limits": { 00:13:21.102 "rw_ios_per_sec": 0, 00:13:21.102 "rw_mbytes_per_sec": 0, 00:13:21.102 "r_mbytes_per_sec": 0, 00:13:21.102 "w_mbytes_per_sec": 0 00:13:21.102 }, 00:13:21.102 "claimed": false, 00:13:21.102 "zoned": false, 00:13:21.102 "supported_io_types": { 00:13:21.102 "read": true, 00:13:21.102 "write": true, 00:13:21.102 "unmap": true, 00:13:21.102 "flush": false, 00:13:21.102 "reset": true, 00:13:21.102 "nvme_admin": false, 00:13:21.102 "nvme_io": false, 00:13:21.102 "nvme_io_md": false, 00:13:21.102 "write_zeroes": true, 00:13:21.102 "zcopy": false, 00:13:21.102 "get_zone_info": false, 00:13:21.102 "zone_management": false, 00:13:21.102 "zone_append": false, 00:13:21.102 "compare": false, 00:13:21.102 "compare_and_write": false, 00:13:21.102 "abort": false, 00:13:21.102 "seek_hole": true, 00:13:21.102 "seek_data": true, 00:13:21.102 "copy": false, 00:13:21.102 "nvme_iov_md": false 00:13:21.102 }, 00:13:21.102 "driver_specific": { 00:13:21.102 "lvol": { 00:13:21.102 "lvol_store_uuid": "84cc3d83-cdd8-426c-9930-46e3208509c6", 00:13:21.102 "base_bdev": "aio_bdev", 00:13:21.102 "thin_provision": false, 00:13:21.102 "num_allocated_clusters": 38, 00:13:21.102 "snapshot": false, 00:13:21.102 "clone": false, 00:13:21.102 "esnap_clone": false 00:13:21.102 } 00:13:21.102 } 00:13:21.102 } 00:13:21.102 ] 00:13:21.102 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # return 0 00:13:21.102 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84cc3d83-cdd8-426c-9930-46e3208509c6 00:13:21.102 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:21.102 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:21.102 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84cc3d83-cdd8-426c-9930-46e3208509c6 00:13:21.102 00:14:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:21.360 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:21.360 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:21.618 [2024-07-16 00:14:40.242293] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev 
being removed: closing lvstore lvs 00:13:21.618 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84cc3d83-cdd8-426c-9930-46e3208509c6 00:13:21.618 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # local es=0 00:13:21.619 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84cc3d83-cdd8-426c-9930-46e3208509c6 00:13:21.619 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:21.619 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:13:21.619 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:21.619 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:13:21.619 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:21.619 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:13:21.619 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:21.619 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:21.619 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84cc3d83-cdd8-426c-9930-46e3208509c6 00:13:21.619 request: 00:13:21.619 { 00:13:21.619 "uuid": "84cc3d83-cdd8-426c-9930-46e3208509c6", 00:13:21.619 "method": "bdev_lvol_get_lvstores", 00:13:21.619 "req_id": 1 00:13:21.619 } 00:13:21.619 Got JSON-RPC error response 00:13:21.619 response: 00:13:21.619 { 00:13:21.619 "code": -19, 00:13:21.619 "message": "No such device" 00:13:21.619 } 00:13:21.619 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # es=1 00:13:21.619 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:13:21.619 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:13:21.619 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:13:21.619 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:21.877 aio_bdev 00:13:21.877 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8a2b5ee6-37d4-440b-b4d2-d5450d77ad22 00:13:21.878 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@891 -- # local bdev_name=8a2b5ee6-37d4-440b-b4d2-d5450d77ad22 00:13:21.878 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@892 -- # local 
bdev_timeout= 00:13:21.878 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@893 -- # local i 00:13:21.878 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:13:21.878 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:13:21.878 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:22.137 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8a2b5ee6-37d4-440b-b4d2-d5450d77ad22 -t 2000 00:13:22.137 [ 00:13:22.137 { 00:13:22.137 "name": "8a2b5ee6-37d4-440b-b4d2-d5450d77ad22", 00:13:22.137 "aliases": [ 00:13:22.137 "lvs/lvol" 00:13:22.137 ], 00:13:22.137 "product_name": "Logical Volume", 00:13:22.137 "block_size": 4096, 00:13:22.137 "num_blocks": 38912, 00:13:22.137 "uuid": "8a2b5ee6-37d4-440b-b4d2-d5450d77ad22", 00:13:22.137 "assigned_rate_limits": { 00:13:22.137 "rw_ios_per_sec": 0, 00:13:22.137 "rw_mbytes_per_sec": 0, 00:13:22.137 "r_mbytes_per_sec": 0, 00:13:22.137 "w_mbytes_per_sec": 0 00:13:22.137 }, 00:13:22.137 "claimed": false, 00:13:22.137 "zoned": false, 00:13:22.137 "supported_io_types": { 00:13:22.137 "read": true, 00:13:22.137 "write": true, 00:13:22.137 "unmap": true, 00:13:22.137 "flush": false, 00:13:22.137 "reset": true, 00:13:22.137 "nvme_admin": false, 00:13:22.137 "nvme_io": false, 00:13:22.137 "nvme_io_md": false, 00:13:22.137 "write_zeroes": true, 00:13:22.137 "zcopy": false, 00:13:22.137 "get_zone_info": false, 00:13:22.137 "zone_management": false, 00:13:22.137 "zone_append": false, 00:13:22.137 "compare": false, 00:13:22.137 "compare_and_write": false, 00:13:22.137 "abort": false, 00:13:22.137 "seek_hole": true, 00:13:22.137 "seek_data": true, 00:13:22.137 "copy": false, 00:13:22.137 "nvme_iov_md": false 00:13:22.137 }, 00:13:22.137 "driver_specific": { 00:13:22.137 "lvol": { 00:13:22.137 "lvol_store_uuid": "84cc3d83-cdd8-426c-9930-46e3208509c6", 00:13:22.137 "base_bdev": "aio_bdev", 00:13:22.137 "thin_provision": false, 00:13:22.137 "num_allocated_clusters": 38, 00:13:22.137 "snapshot": false, 00:13:22.137 "clone": false, 00:13:22.137 "esnap_clone": false 00:13:22.137 } 00:13:22.137 } 00:13:22.137 } 00:13:22.137 ] 00:13:22.137 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # return 0 00:13:22.137 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84cc3d83-cdd8-426c-9930-46e3208509c6 00:13:22.137 00:14:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:22.396 00:14:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:22.396 00:14:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84cc3d83-cdd8-426c-9930-46e3208509c6 00:13:22.396 00:14:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:22.655 00:14:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:22.655 00:14:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8a2b5ee6-37d4-440b-b4d2-d5450d77ad22 00:13:22.655 00:14:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 84cc3d83-cdd8-426c-9930-46e3208509c6 00:13:22.914 00:14:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:23.174 00:13:23.174 real 0m17.581s 00:13:23.174 user 0m44.922s 00:13:23.174 sys 0m3.798s 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:23.174 ************************************ 00:13:23.174 END TEST lvs_grow_dirty 00:13:23.174 ************************************ 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1136 -- # return 0 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@800 -- # type=--id 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@801 -- # id=0 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@802 -- # '[' --id = --pid ']' 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # shm_files=nvmf_trace.0 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # [[ -z nvmf_trace.0 ]] 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # for n in $shm_files 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:23.174 nvmf_trace.0 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # return 0 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:23.174 00:14:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:23.174 rmmod nvme_tcp 00:13:23.174 rmmod nvme_fabrics 00:13:23.174 rmmod nvme_keyring 00:13:23.174 00:14:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:23.174 00:14:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:13:23.174 00:14:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:23.174 00:14:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1462397 ']' 00:13:23.174 00:14:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # 
killprocess 1462397 00:13:23.174 00:14:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@942 -- # '[' -z 1462397 ']' 00:13:23.174 00:14:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # kill -0 1462397 00:13:23.174 00:14:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # uname 00:13:23.174 00:14:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:23.174 00:14:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1462397 00:13:23.433 00:14:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:13:23.433 00:14:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:13:23.433 00:14:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1462397' 00:13:23.433 killing process with pid 1462397 00:13:23.433 00:14:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@961 -- # kill 1462397 00:13:23.433 00:14:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # wait 1462397 00:13:23.433 00:14:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:23.433 00:14:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:23.433 00:14:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:23.433 00:14:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.433 00:14:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:23.433 00:14:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.433 00:14:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.433 00:14:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.001 00:14:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:26.001 00:13:26.001 real 0m42.188s 00:13:26.001 user 1m5.995s 00:13:26.001 sys 0m9.517s 00:13:26.001 00:14:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:26.001 00:14:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:26.001 ************************************ 00:13:26.001 END TEST nvmf_lvs_grow 00:13:26.001 ************************************ 00:13:26.001 00:14:44 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:13:26.001 00:14:44 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:26.001 00:14:44 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:13:26.001 00:14:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:26.001 00:14:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:26.001 ************************************ 00:13:26.001 START TEST nvmf_bdev_io_wait 00:13:26.001 ************************************ 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:26.001 * Looking for test storage... 
00:13:26.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.001 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:13:26.002 00:14:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:31.285 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:31.285 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:31.285 Found net devices under 0000:86:00.0: cvl_0_0 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.285 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:31.286 Found net devices under 0000:86:00.1: cvl_0_1 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:31.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:31.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms
00:13:31.286
00:13:31.286 --- 10.0.0.2 ping statistics ---
00:13:31.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:31.286 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:31.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:31.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms
00:13:31.286
00:13:31.286 --- 10.0.0.1 ping statistics ---
00:13:31.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:31.286 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@716 -- # xtrace_disable
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1466447
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1466447
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@823 -- # '[' -z 1466447 ']'
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@828 -- # local max_retries=100
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:31.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # xtrace_disable
00:13:31.286 00:14:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:13:31.286 [2024-07-16 00:14:49.938195] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization...
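[The nvmf_tcp_init trace above (nvmf/common.sh@229-268) builds the back-to-back topology for this run: one port of the e810 pair (cvl_0_0) is moved into a private network namespace as the target side, the other (cvl_0_1) stays in the default namespace as the initiator, and a one-packet ping in each direction proves the link. A standalone bash sketch of the same bring-up, using the interface names and addresses observed in this run:

TARGET_IF=cvl_0_0; INIT_IF=cvl_0_1; NS=cvl_0_0_ns_spdk   # names/addresses from this run
ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INIT_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                     # target port lives in the netns
ip addr add 10.0.0.1/24 dev "$INIT_IF"                   # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INIT_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
]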
00:13:31.286 [2024-07-16 00:14:49.938249] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.286 [2024-07-16 00:14:49.996084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:31.286 [2024-07-16 00:14:50.085599] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.286 [2024-07-16 00:14:50.085635] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.286 [2024-07-16 00:14:50.085642] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.286 [2024-07-16 00:14:50.085648] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.286 [2024-07-16 00:14:50.085653] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.286 [2024-07-16 00:14:50.085696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.286 [2024-07-16 00:14:50.085792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.286 [2024-07-16 00:14:50.085874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:31.286 [2024-07-16 00:14:50.085875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # return 0 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.225 [2024-07-16 00:14:50.863088] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.225 Malloc0 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.225 [2024-07-16 00:14:50.927002] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1466697 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1466699 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:32.225 { 00:13:32.225 "params": { 00:13:32.225 "name": "Nvme$subsystem", 00:13:32.225 "trtype": "$TEST_TRANSPORT", 00:13:32.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:32.225 "adrfam": "ipv4", 00:13:32.225 "trsvcid": "$NVMF_PORT", 00:13:32.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:32.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:32.225 "hdgst": ${hdgst:-false}, 00:13:32.225 "ddgst": ${ddgst:-false} 00:13:32.225 }, 00:13:32.225 "method": "bdev_nvme_attach_controller" 00:13:32.225 } 00:13:32.225 EOF 00:13:32.225 )") 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json 
/dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1466701 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:32.225 { 00:13:32.225 "params": { 00:13:32.225 "name": "Nvme$subsystem", 00:13:32.225 "trtype": "$TEST_TRANSPORT", 00:13:32.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:32.225 "adrfam": "ipv4", 00:13:32.225 "trsvcid": "$NVMF_PORT", 00:13:32.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:32.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:32.225 "hdgst": ${hdgst:-false}, 00:13:32.225 "ddgst": ${ddgst:-false} 00:13:32.225 }, 00:13:32.225 "method": "bdev_nvme_attach_controller" 00:13:32.225 } 00:13:32.225 EOF 00:13:32.225 )") 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1466704 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:32.225 { 00:13:32.225 "params": { 00:13:32.225 "name": "Nvme$subsystem", 00:13:32.225 "trtype": "$TEST_TRANSPORT", 00:13:32.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:32.225 "adrfam": "ipv4", 00:13:32.225 "trsvcid": "$NVMF_PORT", 00:13:32.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:32.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:32.225 "hdgst": ${hdgst:-false}, 00:13:32.225 "ddgst": ${ddgst:-false} 00:13:32.225 }, 00:13:32.225 "method": "bdev_nvme_attach_controller" 00:13:32.225 } 00:13:32.225 EOF 00:13:32.225 )") 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:32.225 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:32.225 { 
00:13:32.225 "params": { 00:13:32.225 "name": "Nvme$subsystem", 00:13:32.225 "trtype": "$TEST_TRANSPORT", 00:13:32.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:32.226 "adrfam": "ipv4", 00:13:32.226 "trsvcid": "$NVMF_PORT", 00:13:32.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:32.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:32.226 "hdgst": ${hdgst:-false}, 00:13:32.226 "ddgst": ${ddgst:-false} 00:13:32.226 }, 00:13:32.226 "method": "bdev_nvme_attach_controller" 00:13:32.226 } 00:13:32.226 EOF 00:13:32.226 )") 00:13:32.226 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:32.226 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1466697 00:13:32.226 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:32.226 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:32.226 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:32.226 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:32.226 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:32.226 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:32.226 "params": { 00:13:32.226 "name": "Nvme1", 00:13:32.226 "trtype": "tcp", 00:13:32.226 "traddr": "10.0.0.2", 00:13:32.226 "adrfam": "ipv4", 00:13:32.226 "trsvcid": "4420", 00:13:32.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:32.226 "hdgst": false, 00:13:32.226 "ddgst": false 00:13:32.226 }, 00:13:32.226 "method": "bdev_nvme_attach_controller" 00:13:32.226 }' 00:13:32.226 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:32.226 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:32.226 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:32.226 "params": { 00:13:32.226 "name": "Nvme1", 00:13:32.226 "trtype": "tcp", 00:13:32.226 "traddr": "10.0.0.2", 00:13:32.226 "adrfam": "ipv4", 00:13:32.226 "trsvcid": "4420", 00:13:32.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:32.226 "hdgst": false, 00:13:32.226 "ddgst": false 00:13:32.226 }, 00:13:32.226 "method": "bdev_nvme_attach_controller" 00:13:32.226 }' 00:13:32.226 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:32.226 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:32.226 "params": { 00:13:32.226 "name": "Nvme1", 00:13:32.226 "trtype": "tcp", 00:13:32.226 "traddr": "10.0.0.2", 00:13:32.226 "adrfam": "ipv4", 00:13:32.226 "trsvcid": "4420", 00:13:32.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:32.226 "hdgst": false, 00:13:32.226 "ddgst": false 00:13:32.226 }, 00:13:32.226 "method": "bdev_nvme_attach_controller" 00:13:32.226 }' 00:13:32.226 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:32.226 00:14:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:32.226 "params": { 00:13:32.226 "name": "Nvme1", 00:13:32.226 "trtype": "tcp", 00:13:32.226 "traddr": "10.0.0.2", 00:13:32.226 "adrfam": "ipv4", 00:13:32.226 "trsvcid": "4420", 00:13:32.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:32.226 "hdgst": false, 00:13:32.226 "ddgst": false 00:13:32.226 }, 00:13:32.226 "method": "bdev_nvme_attach_controller" 00:13:32.226 }' 
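[Each bdevperf instance above receives its controller config on /dev/fd/63 from gen_nvmf_target_json, which substitutes the transport, address and NQNs per subsystem number and expands to exactly the attach-controller stanza printed above. A reduced sketch of that generation pattern (the real helper in nvmf/common.sh, as traced, loops over "${@:-1}" and joins the stanzas with IFS=,):

subsystem=1
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
jq . <<<"$config"   # validate and pretty-print before handing it to bdevperf, as the helper does
]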
00:13:32.226 [2024-07-16 00:14:50.974766] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:13:32.226 [2024-07-16 00:14:50.974818] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:32.226 [2024-07-16 00:14:50.977287] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:13:32.226 [2024-07-16 00:14:50.977336] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:32.226 [2024-07-16 00:14:50.979564] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:13:32.226 [2024-07-16 00:14:50.979604] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:32.226 [2024-07-16 00:14:50.981057] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:13:32.226 [2024-07-16 00:14:50.981098] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:32.484 [2024-07-16 00:14:51.156054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.484 [2024-07-16 00:14:51.215652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.484 [2024-07-16 00:14:51.238556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:32.484 [2024-07-16 00:14:51.285976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:32.484 [2024-07-16 00:14:51.312902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.742 [2024-07-16 00:14:51.369690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.742 [2024-07-16 00:14:51.405717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:32.742 [2024-07-16 00:14:51.447449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:13:32.742 Running I/O for 1 seconds... 00:13:32.742 Running I/O for 1 seconds... 00:13:32.999 Running I/O for 1 seconds... 00:13:32.999 Running I/O for 1 seconds... 
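[The four jobs now running come from a single fan-out in bdev_io_wait.sh: one bdevperf per I/O type, each pinned to its own core mask and shm instance id, and reaped with wait after the one-second runs finish. Schematically (binary path shortened; the script streams the generated JSON over /dev/fd/63, written here as process substitution):

BDEVPERF=build/examples/bdevperf          # shortened path
common=(-q 128 -o 4096 -t 1 -s 256)       # queue depth, IO size, runtime, DPDK memory
"$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -w write "${common[@]}" & WRITE_PID=$!
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -w read "${common[@]}" & READ_PID=$!
"$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -w flush "${common[@]}" & FLUSH_PID=$!
"$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -w unmap "${common[@]}" & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
]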
00:13:33.934
00:13:33.934 Latency(us)
00:13:33.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:33.934 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:13:33.934 Nvme1n1 : 1.02 8114.91 31.70 0.00 0.00 15652.10 6496.61 23478.98
00:13:33.934 ===================================================================================================================
00:13:33.934 Total : 8114.91 31.70 0.00 0.00 15652.10 6496.61 23478.98
00:13:33.934
00:13:33.934 Latency(us)
00:13:33.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:33.934 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:13:33.934 Nvme1n1 : 1.00 244750.16 956.06 0.00 0.00 520.71 213.70 669.61
00:13:33.934 ===================================================================================================================
00:13:33.934 Total : 244750.16 956.06 0.00 0.00 520.71 213.70 669.61
00:13:33.934
00:13:33.934 Latency(us)
00:13:33.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:33.934 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:13:33.934 Nvme1n1 : 1.00 7656.19 29.91 0.00 0.00 16672.65 5043.42 31457.28
00:13:33.934 ===================================================================================================================
00:13:33.934 Total : 7656.19 29.91 0.00 0.00 16672.65 5043.42 31457.28
00:13:33.934
00:13:33.934 Latency(us)
00:13:33.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:33.934 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:13:33.934 Nvme1n1 : 1.00 12381.53 48.37 0.00 0.00 10310.20 4587.52 22567.18
00:13:33.934 ===================================================================================================================
00:13:33.934 Total : 12381.53 48.37 0.00 0.00 10310.20 4587.52 22567.18
00:13:34.192 00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1466699
00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1466701
00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1466704
00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable
00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync
00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e
00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:34.192 rmmod nvme_tcp
00:13:34.192 rmmod nvme_fabrics
00:13:34.192 rmmod nvme_keyring
00:13:34.192 00:14:52 nvmf_tcp.nvmf_bdev_io_wait
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:34.192 00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:13:34.192 00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:13:34.192 00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1466447 ']' 00:13:34.192 00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1466447 00:13:34.192 00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@942 -- # '[' -z 1466447 ']' 00:13:34.192 00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # kill -0 1466447 00:13:34.192 00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # uname 00:13:34.192 00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:34.192 00:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1466447 00:13:34.192 00:14:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:13:34.192 00:14:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:13:34.192 00:14:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1466447' 00:13:34.192 killing process with pid 1466447 00:13:34.192 00:14:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@961 -- # kill 1466447 00:13:34.192 00:14:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # wait 1466447 00:13:34.450 00:14:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:34.450 00:14:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:34.450 00:14:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:34.450 00:14:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:34.450 00:14:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:34.450 00:14:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.450 00:14:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.450 00:14:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.990 00:14:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:36.990 00:13:36.990 real 0m10.887s 00:13:36.990 user 0m19.572s 00:13:36.990 sys 0m5.663s 00:13:36.990 00:14:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:36.990 00:14:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:36.990 ************************************ 00:13:36.990 END TEST nvmf_bdev_io_wait 00:13:36.991 ************************************ 00:13:36.991 00:14:55 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:13:36.991 00:14:55 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:36.991 00:14:55 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:13:36.991 00:14:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:36.991 00:14:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:36.991 ************************************ 00:13:36.991 START TEST nvmf_queue_depth 00:13:36.991 ************************************ 
00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:36.991 * Looking for test storage... 00:13:36.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:13:36.991 00:14:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:42.292 
00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:42.292 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:42.292 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:42.292 Found net devices under 0000:86:00.0: cvl_0_0 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:42.292 Found net devices under 0000:86:00.1: cvl_0_1 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:42.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:42.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms
00:13:42.292
00:13:42.292 --- 10.0.0.2 ping statistics ---
00:13:42.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:42.292 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms
00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:42.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:42.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms
00:13:42.292
00:13:42.292 --- 10.0.0.1 ping statistics ---
00:13:42.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:42.292 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms
00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0
00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:13:42.292 00:15:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:13:42.292 00:15:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:13:42.292 00:15:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:13:42.292 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@716 -- # xtrace_disable
00:13:42.292 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:13:42.292 00:15:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1470541
00:13:42.292 00:15:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1470541
00:13:42.292 00:15:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:13:42.292 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@823 -- # '[' -z 1470541 ']'
00:13:42.292 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:42.292 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # local max_retries=100
00:13:42.292 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:42.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:42.292 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # xtrace_disable
00:13:42.292 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:13:42.293 [2024-07-16 00:15:01.070351] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization...
00:13:42.293 [2024-07-16 00:15:01.070397] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.293 [2024-07-16 00:15:01.129404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.551 [2024-07-16 00:15:01.207917] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.551 [2024-07-16 00:15:01.207958] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.551 [2024-07-16 00:15:01.207965] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.551 [2024-07-16 00:15:01.207971] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.551 [2024-07-16 00:15:01.207976] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.551 [2024-07-16 00:15:01.207996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.120 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:43.120 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # return 0 00:13:43.120 00:15:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:43.120 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:43.120 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:43.120 00:15:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.120 00:15:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:43.120 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:43.121 [2024-07-16 00:15:01.899617] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:43.121 Malloc0 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 
-- # set +x 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:43.121 [2024-07-16 00:15:01.959221] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1470834 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1470834 /var/tmp/bdevperf.sock 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@823 -- # '[' -z 1470834 ']' 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:43.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:43.121 00:15:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:43.380 [2024-07-16 00:15:02.008948] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
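[Condensed, the rpc_cmd sequence that stands the target up for this test (and for bdev_io_wait before it) is the same five calls; in the harness, rpc_cmd is effectively a wrapper over scripts/rpc.py talking to /var/tmp/spdk.sock inside the namespace:

RPC="scripts/rpc.py"                                  # stand-in for rpc_cmd
$RPC nvmf_create_transport -t tcp -o -u 8192          # transport options exactly as traced
$RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB backing bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
]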
00:13:43.380 [2024-07-16 00:15:02.008994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470834 ]
00:13:43.380 [2024-07-16 00:15:02.063915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:43.380 [2024-07-16 00:15:02.144722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:44.318 00:15:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:13:44.318 00:15:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # return 0
00:13:44.318 00:15:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:13:44.318 00:15:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable
00:13:44.318 00:15:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:13:44.318 NVMe0n1
00:13:44.318 00:15:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:13:44.318 00:15:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:13:44.318 Running I/O for 10 seconds...
00:13:54.374
00:13:54.374 Latency(us)
00:13:54.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:54.374 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:13:54.374 Verification LBA range: start 0x0 length 0x4000
00:13:54.374 NVMe0n1 : 10.04 12241.69 47.82 0.00 0.00 83384.00 5157.40 58811.44
00:13:54.374 ===================================================================================================================
00:13:54.374 Total : 12241.69 47.82 0.00 0.00 83384.00 5157.40 58811.44
00:13:54.374 0
00:13:54.374 00:15:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1470834
00:13:54.374 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@942 -- # '[' -z 1470834 ']'
00:13:54.374 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # kill -0 1470834
00:13:54.374 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # uname
00:13:54.374 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:13:54.374 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1470834
00:13:54.374 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # process_name=reactor_0
00:13:54.374 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']'
00:13:54.374 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1470834'
00:13:54.374 killing process with pid 1470834
00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@961 -- # kill 1470834
00:13:54.374 Received shutdown signal, test time was about 10.000000 seconds
00:13:54.374
00:13:54.374 Latency(us)
00:13:54.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:54.374 ===================================================================================================================
00:13:54.374 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
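The run itself is driven entirely over bdevperf's private socket; both calls appear verbatim in the trace, shown here as a standalone sketch:

# Attach the remote subsystem as bdev NVMe0n1, then kick off the configured workload.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

As a sanity check on the table, 12241.69 IOPS at 4096 B per IO works out to 12241.69 * 4096 / 2^20 ≈ 47.8 MiB/s, which matches the MiB/s column.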
00:13:54.374 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # wait 1470834 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:54.632 rmmod nvme_tcp 00:13:54.632 rmmod nvme_fabrics 00:13:54.632 rmmod nvme_keyring 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1470541 ']' 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1470541 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@942 -- # '[' -z 1470541 ']' 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # kill -0 1470541 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # uname 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1470541 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1470541' 00:13:54.632 killing process with pid 1470541 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@961 -- # kill 1470541 00:13:54.632 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # wait 1470541 00:13:54.889 00:15:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:54.889 00:15:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:54.889 00:15:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:54.889 00:15:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:54.889 00:15:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:54.889 00:15:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.889 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.889 00:15:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.421 00:15:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:57.421 00:13:57.421 real 0m20.352s 00:13:57.421 user 0m24.794s 00:13:57.421 sys 0m5.725s 00:13:57.421 00:15:15 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:57.421 00:15:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:57.421 ************************************ 00:13:57.421 END TEST nvmf_queue_depth 00:13:57.421 ************************************ 00:13:57.421 00:15:15 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:13:57.421 00:15:15 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:57.421 00:15:15 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:13:57.421 00:15:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:57.421 00:15:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:57.421 ************************************ 00:13:57.421 START TEST nvmf_target_multipath 00:13:57.421 ************************************ 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:57.421 * Looking for test storage... 00:13:57.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.421 00:15:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:13:57.422 00:15:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.699 
00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:02.699 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:02.699 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:02.699 00:15:20 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:02.699 Found net devices under 0000:86:00.0: cvl_0_0 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:02.699 Found net devices under 0000:86:00.1: cvl_0_1 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:02.699 00:15:20 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:02.699 00:15:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.699 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.699 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.699 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.699 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:02.699 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.699 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.699 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.699 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:02.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:14:02.699 00:14:02.699 --- 10.0.0.2 ping statistics --- 00:14:02.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.699 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:14:02.699 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:02.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:14:02.699 00:14:02.699 --- 10.0.0.1 ping statistics --- 00:14:02.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.699 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:02.700 only one NIC for nvmf test 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:02.700 rmmod nvme_tcp 00:14:02.700 rmmod nvme_fabrics 00:14:02.700 rmmod nvme_keyring 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.700 00:15:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:04.605 00:14:04.605 real 0m7.644s 00:14:04.605 user 0m1.534s 00:14:04.605 sys 0m4.103s 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1118 -- # xtrace_disable 00:14:04.605 00:15:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:04.605 ************************************ 00:14:04.605 END TEST nvmf_target_multipath 00:14:04.605 ************************************ 00:14:04.605 00:15:23 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:14:04.605 00:15:23 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:04.605 00:15:23 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:14:04.605 00:15:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:14:04.605 00:15:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:04.865 ************************************ 00:14:04.865 START TEST nvmf_zcopy 00:14:04.865 ************************************ 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:04.865 * Looking for test storage... 
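The multipath prologue and epilogue above reduce to a short iproute2 sequence around the two ice ports plus a module unload; a condensed sketch of the commands visible in the trace (cvl_0_0/cvl_0_1 are the netdevs discovered earlier):

# Bring-up: the first port becomes the target inside a namespace, the second stays as the initiator.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                 # initiator -> target sanity check

# Teardown (nvmftestfini): unload the initiator modules, then flush the leftover address.
modprobe -v -r nvme-tcp || true    # assumption: failures tolerated while the module drains; the script retries up to 20x
modprobe -v -r nvme-fabrics
ip -4 addr flush cvl_0_1

With only one NIC pair present ('only one NIC for nvmf test'), multipath.sh exits 0 before doing any multipath IO, which is why the whole test finishes in under eight seconds of wall time.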
00:14:04.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
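One detail worth noting in this prologue: the initiator identity is generated, not hard-coded. A sketch, where the host-ID extraction is an assumption (the trace only shows the resulting values):

NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep the trailing UUID as the host ID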
00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:04.865 00:15:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:04.866 00:15:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:10.142 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.142 
00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:10.142 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:10.142 Found net devices under 0000:86:00.0: cvl_0_0 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.142 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:10.143 Found net devices under 0000:86:00.1: cvl_0_1 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:10.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:14:10.143 00:14:10.143 --- 10.0.0.2 ping statistics --- 00:14:10.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.143 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:10.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:14:10.143 00:14:10.143 --- 10.0.0.1 ping statistics --- 00:14:10.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.143 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1479880 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1479880 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@823 -- # '[' -z 1479880 ']' 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@828 -- # local max_retries=100 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # xtrace_disable 00:14:10.143 00:15:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:10.143 [2024-07-16 00:15:28.781723] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:14:10.143 [2024-07-16 00:15:28.781764] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.143 [2024-07-16 00:15:28.838608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.143 [2024-07-16 00:15:28.916519] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.143 [2024-07-16 00:15:28.916552] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
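Because the target NIC now lives inside the namespace, the zcopy target itself is launched through ip netns exec, exactly as traced above; a standalone sketch:

# -i 0 selects shm id 0, -e 0xFFFF enables all tracepoint groups, -m 0x2 pins the app to core 1.
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!                   # 1479880 in this run
waitforlisten "$nvmfpid"     # autotest helper: poll /var/tmp/spdk.sock until the app answers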
00:14:10.143 [2024-07-16 00:15:28.916559] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.143 [2024-07-16 00:15:28.916565] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.143 [2024-07-16 00:15:28.916570] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.143 [2024-07-16 00:15:28.916587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # return 0 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:11.081 [2024-07-16 00:15:29.610097] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:11.081 [2024-07-16 00:15:29.626237] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:11.081 malloc0 00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:11.081 
00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable
00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:14:11.081 {
00:14:11.081 "params": {
00:14:11.081 "name": "Nvme$subsystem",
00:14:11.081 "trtype": "$TEST_TRANSPORT",
00:14:11.081 "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:11.081 "adrfam": "ipv4",
00:14:11.081 "trsvcid": "$NVMF_PORT",
00:14:11.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:11.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:11.081 "hdgst": ${hdgst:-false},
00:14:11.081 "ddgst": ${ddgst:-false}
00:14:11.081 },
00:14:11.081 "method": "bdev_nvme_attach_controller"
00:14:11.081 }
00:14:11.081 EOF
00:14:11.081 )")
00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:14:11.081 00:15:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:14:11.081 "params": {
00:14:11.081 "name": "Nvme1",
00:14:11.081 "trtype": "tcp",
00:14:11.081 "traddr": "10.0.0.2",
00:14:11.081 "adrfam": "ipv4",
00:14:11.081 "trsvcid": "4420",
00:14:11.081 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:14:11.081 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:14:11.081 "hdgst": false,
00:14:11.081 "ddgst": false
00:14:11.081 },
00:14:11.081 "method": "bdev_nvme_attach_controller"
00:14:11.081 }'
00:14:11.081 [2024-07-16 00:15:29.701481] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization...
00:14:11.081 [2024-07-16 00:15:29.701523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480126 ]
00:14:11.081 [2024-07-16 00:15:29.753621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:11.081 [2024-07-16 00:15:29.826777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:14:11.341 Running I/O for 10 seconds...
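A note on the --json /dev/fd/62 argument traced above: it is the read end of a bash process substitution, i.e. the harness effectively runs bdevperf --json <(gen_nvmf_target_json), and the JSON echoed above is what bdevperf consumes as its startup config. A minimal standalone sketch of the same pattern follows; the params block is copied from the trace, while the surrounding "subsystems"/"bdev" wrapper reflects the standard SPDK JSON config layout rather than anything printed in this log:

#!/usr/bin/env bash
# Sketch: hand bdevperf an inline JSON config through process substitution.
gen_config() {
	cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}
# <(...) expands to a /dev/fd/NN path, which is why the trace shows --json /dev/fd/62.
./build/examples/bdevperf --json <(gen_config) -t 10 -q 128 -w verify -o 8192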
00:14:21.322
00:14:21.322                                                              Latency(us)
00:14:21.322 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s    TO/s    Average       min       max
00:14:21.322 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:14:21.322 Verification LBA range: start 0x0 length 0x1000
00:14:21.322 Nvme1n1                     :      10.01  8670.39    67.74     0.00    0.00   14720.01   1175.37  29633.67
00:14:21.322 ===================================================================================================================
00:14:21.322 Total                       :             8670.39    67.74     0.00    0.00   14720.01   1175.37  29633.67
00:14:21.644 00:15:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1481766
00:14:21.644 00:15:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:14:21.644 00:15:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:21.644 00:15:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:14:21.644 00:15:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:14:21.644 00:15:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:14:21.644 00:15:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:14:21.644 [2024-07-16 00:15:40.328989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:21.644 [2024-07-16 00:15:40.329020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:21.644 00:15:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:14:21.644 00:15:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:14:21.644 {
00:14:21.644 "params": {
00:14:21.644 "name": "Nvme$subsystem",
00:14:21.644 "trtype": "$TEST_TRANSPORT",
00:14:21.644 "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:21.644 "adrfam": "ipv4",
00:14:21.644 "trsvcid": "$NVMF_PORT",
00:14:21.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:21.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:21.644 "hdgst": ${hdgst:-false},
00:14:21.644 "ddgst": ${ddgst:-false}
00:14:21.644 },
00:14:21.644 "method": "bdev_nvme_attach_controller"
00:14:21.644 }
00:14:21.644 EOF
00:14:21.644 )")
00:14:21.644 00:15:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:14:21.644 00:15:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:14:21.644 00:15:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:14:21.644 00:15:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:14:21.644 "params": {
00:14:21.644 "name": "Nvme1",
00:14:21.644 "trtype": "tcp",
00:14:21.644 "traddr": "10.0.0.2",
00:14:21.644 "adrfam": "ipv4",
00:14:21.644 "trsvcid": "4420",
00:14:21.644 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:14:21.644 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:14:21.644 "hdgst": false,
00:14:21.644 "ddgst": false
00:14:21.644 },
00:14:21.644 "method": "bdev_nvme_attach_controller"
00:14:21.644 }'
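The second bdevperf pass launched above differs from the first only in its workload flags. Side by side, with flag readings based on bdevperf's usual option set (not spelled out in this log):

# First pass:  10 s verify workload, queue depth 128, 8 KiB I/Os
bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192
# Second pass: 5 s random mixed workload at the same depth and I/O size;
# -M 50 sets the read share of the mix to 50%, i.e. half reads, half writes
bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192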
[... the NSID error pair above repeats roughly every 8-10 ms from here on as the test loops the add_ns RPC; the repeated pairs are omitted around the bdevperf startup messages below ...]
00:14:21.644 [2024-07-16 00:15:40.369439] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization...
00:14:21.644 [2024-07-16 00:15:40.369481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1481766 ]
00:14:21.644 [2024-07-16 00:15:40.423605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:21.911 [2024-07-16 00:15:40.504117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:14:22.171 Running I/O for 5 seconds...
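The flood of error pairs around this point is deliberate: while the 5-second randrw job runs, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which is already attached, so every attempt pauses the subsystem, fails with the two errors above, and resumes it, exercising in-flight zero-copy requests across subsystem pause/resume. The exact loop lives in target/zcopy.sh; the following is a hypothetical reconstruction of its shape, not a verbatim copy:

# Hypothetical reconstruction: hammer the add-namespace path for as long as
# the perf job is alive. NSID 1 is already in use, so each call fails with
# "Requested NSID 1 already in use" / "Unable to add namespace".
while kill -0 "$perfpid" 2> /dev/null; do
	rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done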
00:14:22.171 [2024-07-16 00:15:40.818296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:22.171 [2024-07-16 00:15:40.818309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line pair keeps repeating at the same cadence for the rest of the 5-second run; the intervening repetitions are omitted ...]
00:14:23.734 [2024-07-16 00:15:42.469372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:23.734 [2024-07-16 00:15:42.469390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:23.734 [2024-07-16 00:15:42.477984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:23.734 [2024-07-16 00:15:42.478003]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.734 [2024-07-16 00:15:42.487267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.734 [2024-07-16 00:15:42.487286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.734 [2024-07-16 00:15:42.496050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.734 [2024-07-16 00:15:42.496068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.734 [2024-07-16 00:15:42.505201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.734 [2024-07-16 00:15:42.505220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.734 [2024-07-16 00:15:42.513439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.734 [2024-07-16 00:15:42.513457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.734 [2024-07-16 00:15:42.522059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.734 [2024-07-16 00:15:42.522076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.734 [2024-07-16 00:15:42.531547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.734 [2024-07-16 00:15:42.531566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.734 [2024-07-16 00:15:42.540877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.734 [2024-07-16 00:15:42.540894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.734 [2024-07-16 00:15:42.549496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.734 [2024-07-16 00:15:42.549514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.734 [2024-07-16 00:15:42.558755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.734 [2024-07-16 00:15:42.558773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.734 [2024-07-16 00:15:42.567345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.734 [2024-07-16 00:15:42.567364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.734 [2024-07-16 00:15:42.575988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.734 [2024-07-16 00:15:42.576011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.734 [2024-07-16 00:15:42.585197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.734 [2024-07-16 00:15:42.585218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.993 [2024-07-16 00:15:42.593866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.993 [2024-07-16 00:15:42.593885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.993 [2024-07-16 00:15:42.602480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.993 [2024-07-16 00:15:42.602499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.993 [2024-07-16 00:15:42.609527] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.993 [2024-07-16 00:15:42.609546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.993 [2024-07-16 00:15:42.619660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.993 [2024-07-16 00:15:42.619679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.993 [2024-07-16 00:15:42.628389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.993 [2024-07-16 00:15:42.628408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.993 [2024-07-16 00:15:42.637661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.993 [2024-07-16 00:15:42.637679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.993 [2024-07-16 00:15:42.647192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.993 [2024-07-16 00:15:42.647210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.993 [2024-07-16 00:15:42.655864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.993 [2024-07-16 00:15:42.655882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.993 [2024-07-16 00:15:42.665003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.993 [2024-07-16 00:15:42.665021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.993 [2024-07-16 00:15:42.673622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.993 [2024-07-16 00:15:42.673639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.993 [2024-07-16 00:15:42.682178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.993 [2024-07-16 00:15:42.682196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.993 [2024-07-16 00:15:42.691362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.993 [2024-07-16 00:15:42.691381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.993 [2024-07-16 00:15:42.700238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.993 [2024-07-16 00:15:42.700256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.993 [2024-07-16 00:15:42.708483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.993 [2024-07-16 00:15:42.708501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.993 [2024-07-16 00:15:42.715528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.993 [2024-07-16 00:15:42.715545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.993 [2024-07-16 00:15:42.725788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.993 [2024-07-16 00:15:42.725806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.993 [2024-07-16 00:15:42.734522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.993 [2024-07-16 00:15:42.734540] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.993 [2024-07-16 00:15:42.743023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.993 [2024-07-16 00:15:42.743045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.993 [2024-07-16 00:15:42.749965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.993 [2024-07-16 00:15:42.749983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.993 [2024-07-16 00:15:42.760223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.994 [2024-07-16 00:15:42.760246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.994 [2024-07-16 00:15:42.769515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.994 [2024-07-16 00:15:42.769533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.994 [2024-07-16 00:15:42.778154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.994 [2024-07-16 00:15:42.778173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.994 [2024-07-16 00:15:42.786825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.994 [2024-07-16 00:15:42.786844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.994 [2024-07-16 00:15:42.795935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.994 [2024-07-16 00:15:42.795957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.994 [2024-07-16 00:15:42.805000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.994 [2024-07-16 00:15:42.805018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.994 [2024-07-16 00:15:42.814175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.994 [2024-07-16 00:15:42.814193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.994 [2024-07-16 00:15:42.822874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.994 [2024-07-16 00:15:42.822892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.994 [2024-07-16 00:15:42.831140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.994 [2024-07-16 00:15:42.831158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.994 [2024-07-16 00:15:42.840482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:23.994 [2024-07-16 00:15:42.840500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:42.848630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:42.848650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:42.856966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:42.856984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:42.865419] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:42.865438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:42.873770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:42.873789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:42.882358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:42.882377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:42.890772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:42.890790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:42.899242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:42.899277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:42.907588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:42.907611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:42.916052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:42.916071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:42.924741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:42.924760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:42.934110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:42.934128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:42.942671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:42.942689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:42.949462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:42.949479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:42.960573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:42.960591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:42.969330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:42.969347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:42.977654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:42.977672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:42.986313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:42.986331] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:42.994799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:42.994816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:43.003772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:43.003790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:43.012675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:43.012693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:43.021244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:43.021263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:43.030077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:43.030095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:43.039209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:43.039235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:43.048109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:43.048128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:43.056687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:43.056706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:43.065072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:43.065091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:43.071908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:43.071936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:43.082955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:43.082976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:43.091810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:43.091830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.253 [2024-07-16 00:15:43.101188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.253 [2024-07-16 00:15:43.101207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.110104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.110124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.118941] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.118960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.125745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.125764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.136755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.136774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.145507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.145526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.155172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.155190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.163901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.163919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.172983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.173002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.182164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.182184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.190990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.191008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.199681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.199701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.209012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.209031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.217322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.217341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.226631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.226650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.235930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.235949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.244938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.244956] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.254354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.254371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.263432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.263450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.272579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.272598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.279575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.279593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.290515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.290534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.299360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.299378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.307871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.307889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.316516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.316535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.325509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.325528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.333938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.333957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.342307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.342326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.350923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.350942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.519 [2024-07-16 00:15:43.359035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.519 [2024-07-16 00:15:43.359054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.520 [2024-07-16 00:15:43.367794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.520 [2024-07-16 00:15:43.367813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.779 [2024-07-16 00:15:43.375182] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.779 [2024-07-16 00:15:43.375201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.779 [2024-07-16 00:15:43.385593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.779 [2024-07-16 00:15:43.385612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.779 [2024-07-16 00:15:43.394331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.779 [2024-07-16 00:15:43.394349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.779 [2024-07-16 00:15:43.403555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.779 [2024-07-16 00:15:43.403574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.779 [2024-07-16 00:15:43.412451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.779 [2024-07-16 00:15:43.412480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.779 [2024-07-16 00:15:43.421499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.779 [2024-07-16 00:15:43.421517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.779 [2024-07-16 00:15:43.430701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.779 [2024-07-16 00:15:43.430721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.779 [2024-07-16 00:15:43.440030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.779 [2024-07-16 00:15:43.440048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.779 [2024-07-16 00:15:43.448582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.779 [2024-07-16 00:15:43.448601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.779 [2024-07-16 00:15:43.457366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.779 [2024-07-16 00:15:43.457385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.780 [2024-07-16 00:15:43.465986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.780 [2024-07-16 00:15:43.466005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.780 [2024-07-16 00:15:43.474734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.780 [2024-07-16 00:15:43.474752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.780 [2024-07-16 00:15:43.483453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.780 [2024-07-16 00:15:43.483471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.780 [2024-07-16 00:15:43.491976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.780 [2024-07-16 00:15:43.491995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.780 [2024-07-16 00:15:43.501875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.780 [2024-07-16 00:15:43.501894] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.780 [2024-07-16 00:15:43.510258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.780 [2024-07-16 00:15:43.510276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.780 [2024-07-16 00:15:43.518943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.780 [2024-07-16 00:15:43.518961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.780 [2024-07-16 00:15:43.527665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.780 [2024-07-16 00:15:43.527683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.780 [2024-07-16 00:15:43.536852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.780 [2024-07-16 00:15:43.536871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.780 [2024-07-16 00:15:43.545574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.780 [2024-07-16 00:15:43.545592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.780 [2024-07-16 00:15:43.553875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.780 [2024-07-16 00:15:43.553893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.780 [2024-07-16 00:15:43.563243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.780 [2024-07-16 00:15:43.563261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.780 [2024-07-16 00:15:43.570269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.780 [2024-07-16 00:15:43.570287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.780 [2024-07-16 00:15:43.580151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.780 [2024-07-16 00:15:43.580169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.780 [2024-07-16 00:15:43.588692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.780 [2024-07-16 00:15:43.588711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.780 [2024-07-16 00:15:43.597223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.780 [2024-07-16 00:15:43.597247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.780 [2024-07-16 00:15:43.606367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.780 [2024-07-16 00:15:43.606385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.780 [2024-07-16 00:15:43.613231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.780 [2024-07-16 00:15:43.613249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.780 [2024-07-16 00:15:43.623461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:24.780 [2024-07-16 00:15:43.623479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.632706] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.632726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.641296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.641315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.649980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.649998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.658705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.658723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.667234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.667253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.675758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.675776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.684664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.684684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.693292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.693311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.702565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.702583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.711726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.711743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.720870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.720888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.729463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.729482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.736382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.736404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.746893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.746911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.755434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.755451] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.763618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.763636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.772437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.772454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.781262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.781281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.789679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.789698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.798021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.798039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.805105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.805123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.815526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.815545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.824360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.824379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.833546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.833565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.039 [2024-07-16 00:15:43.840683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.039 [2024-07-16 00:15:43.840702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.040 [2024-07-16 00:15:43.850764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.040 [2024-07-16 00:15:43.850783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.040 [2024-07-16 00:15:43.859589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.040 [2024-07-16 00:15:43.859608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.040 [2024-07-16 00:15:43.868208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.040 [2024-07-16 00:15:43.868230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.040 [2024-07-16 00:15:43.877544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.040 [2024-07-16 00:15:43.877563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.040 [2024-07-16 00:15:43.886215] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.040 [2024-07-16 00:15:43.886255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:43.895080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:43.895099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:43.904185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:43.904207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:43.913544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:43.913562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:43.922029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:43.922046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:43.928873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:43.928890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:43.939983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:43.940001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:43.948733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:43.948751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:43.957924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:43.957942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:43.966451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:43.966469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:43.975511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:43.975529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:43.984575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:43.984593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:43.992771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:43.992789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:44.001021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:44.001039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:44.009586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:44.009603] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:44.018481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:44.018500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:44.027119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:44.027137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:44.035676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:44.035693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:44.044190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:44.044208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:44.050963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:44.050980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:44.061215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:44.061238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:44.069771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:44.069793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:44.079026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:44.079045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:44.088215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:44.088239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:44.097338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:44.097358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:44.106425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:44.106442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:44.114987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:44.115005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:44.124310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:44.124328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:44.131129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:25.299 [2024-07-16 00:15:44.131147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.299 [2024-07-16 00:15:44.142011] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:25.299 [2024-07-16 00:15:44.142029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:25.558 [2024-07-16 00:15:44.151417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:25.558 [2024-07-16 00:15:44.151437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats at roughly 7-18 ms intervals, from 00:15:44.161 through 00:15:45.831, while zcopy.sh keeps retrying the add-namespace RPC against NSID 1 ...]
00:14:27.115 [2024-07-16 00:15:45.837518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:27.115 [2024-07-16 00:15:45.837535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:27.115
00:14:27.115 Latency(us)
00:14:27.115 Device Information          : runtime(s)      IOPS    MiB/s   Fail/s    TO/s   Average      min       max
00:14:27.115 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:14:27.115 Nvme1n1                     :       5.01  16636.75   129.97     0.00    0.00   7687.34  3191.32  18008.15
00:14:27.115 ===================================================================================================================
00:14:27.115 Total                       :             16636.75   129.97     0.00    0.00   7687.34  3191.32  18008.15
00:14:27.115 [2024-07-16 00:15:45.845532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:27.115 [2024-07-16 00:15:45.845548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair continues at ~8 ms intervals from 00:15:45.853 through 00:15:46.005 ...]
00:14:27.373 [2024-07-16 00:15:46.013984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:27.373 [2024-07-16 00:15:46.013994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:27.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1481766) - No such process
00:14:27.373 00:15:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1481766
00:14:27.373 00:15:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:27.373 00:15:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable
00:14:27.373 00:15:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:27.373 00:15:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:14:27.373 00:15:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:14:27.373 00:15:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable
00:14:27.373 00:15:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:27.373 delay0
00:14:27.373 00:15:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:14:27.373 00:15:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:14:27.373 00:15:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable
00:14:27.373 00:15:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:27.373 00:15:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:14:27.373 00:15:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-07-16 00:15:46.143636] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:14:33.958 Initializing NVMe Controllers
00:14:33.958 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:33.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:14:33.958 Initialization complete. Launching workers.
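The sequence just above is the whole abort scenario in miniature: zcopy.sh wraps the existing malloc0 bdev in a delay bdev (bdev_delay_create takes its -r/-t/-w/-n latencies in microseconds, so 1000000 is roughly one second per operation), re-attaches it as NSID 1, and then drives it with the bundled abort example so that commands stay in flight long enough to be cancelled. Reproduced outside the harness, the step looks roughly like the sketch below; this is a sketch only, assuming the default RPC socket and that the TCP target, the cnode1 subsystem, and the malloc0 bdev were already created earlier in the run (rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py):

    # Wrap the existing bdev in a ~1 s delay bdev and expose it as NSID 1.
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # With every I/O held for ~1 s inside delay0, the abort example always
    # has outstanding commands to cancel during its 5-second (-t 5) run.
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'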
00:14:33.958 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 104
00:14:33.958 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 391, failed to submit 33
00:14:33.958 success 184, unsuccess 207, failed 0
00:15:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:15:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:15:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:15:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:15:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:15:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:15:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:15:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
00:15:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1479880 ']'
00:15:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1479880
00:15:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@942 -- # '[' -z 1479880 ']'
00:15:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # kill -0 1479880
00:15:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@947 -- # uname
00:15:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:15:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1479880
00:15:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # process_name=reactor_1
00:15:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']'
00:15:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1479880'
killing process with pid 1479880
00:15:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@961 -- # kill 1479880
00:15:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # wait 1479880
00:15:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:15:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:15:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:15:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns
00:15:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:35.857 00:15:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:35.857
00:14:35.857 real 0m31.094s
00:14:35.857 user 0m42.890s
00:14:35.857 sys 0m10.462s
00:15:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1118 -- # xtrace_disable
00:14:35.857 00:15:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:35.857 ************************************
00:14:35.857 END TEST nvmf_zcopy
00:14:35.857 ************************************
00:14:35.857 00:15:54 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0
00:15:54 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:15:54 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']'
00:15:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable
00:15:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:35.857 ************************************
00:14:35.857 START TEST nvmf_nmic
00:14:35.857 ************************************
00:15:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:14:36.116 * Looking for test storage...
00:14:36.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:15:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:15:54 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:54 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:54 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:54 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:14:36.116 00:15:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:41.390 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:41.390 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:41.390 Found net devices under 0000:86:00.0: cvl_0_0 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:41.390 Found net devices under 0000:86:00.1: cvl_0_1 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:41.390 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.391 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:41.391 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:41.391 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:41.391 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:41.391 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:41.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:14:41.651 00:14:41.651 --- 10.0.0.2 ping statistics --- 00:14:41.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.651 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:41.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:41.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:14:41.651 00:14:41.651 --- 10.0.0.1 ping statistics --- 00:14:41.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.651 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1487321 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1487321 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@823 -- # '[' -z 1487321 ']' 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@828 -- # local max_retries=100 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # xtrace_disable 00:14:41.651 00:16:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.651 [2024-07-16 00:16:00.452998] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:14:41.651 [2024-07-16 00:16:00.453042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.911 [2024-07-16 00:16:00.509569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:41.911 [2024-07-16 00:16:00.590647] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.911 [2024-07-16 00:16:00.590682] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
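[annotation, not part of the captured trace] The nvmftestinit sequence above reduces to the minimal sketch below: the first E810 port (cvl_0_0) is moved into a private network namespace and serves as the target side, while the second port (cvl_0_1) stays in the root namespace as the initiator side. This is a hand-condensed re-creation assuming the same interface names and addresses this run discovered, not the full common.sh logic.

    ip netns add cvl_0_0_ns_spdk                        # target lives in its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port out of the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                  # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself is then launched inside that namespace, exactly as the nvmfpid line above shows:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF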
00:14:41.911 [2024-07-16 00:16:00.590689] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.911 [2024-07-16 00:16:00.590695] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.911 [2024-07-16 00:16:00.590700] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:41.911 [2024-07-16 00:16:00.590737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.911 [2024-07-16 00:16:00.590835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.911 [2024-07-16 00:16:00.590922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:41.911 [2024-07-16 00:16:00.590924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.481 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:42.481 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # return 0 00:14:42.481 00:16:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:42.481 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:42.481 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.481 00:16:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.481 00:16:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:42.481 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:42.481 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.481 [2024-07-16 00:16:01.313306] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.481 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:42.481 00:16:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:42.481 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:42.481 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.791 Malloc0 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.791 [2024-07-16 00:16:01.364908] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:42.791 test case1: single bdev can't be used in multiple subsystems 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.791 [2024-07-16 00:16:01.388824] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:42.791 [2024-07-16 00:16:01.388842] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:42.791 [2024-07-16 00:16:01.388849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.791 request: 00:14:42.791 { 00:14:42.791 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:42.791 "namespace": { 00:14:42.791 "bdev_name": "Malloc0", 00:14:42.791 "no_auto_visible": false 00:14:42.791 }, 00:14:42.791 "method": "nvmf_subsystem_add_ns", 00:14:42.791 "req_id": 1 00:14:42.791 } 00:14:42.791 Got JSON-RPC error response 00:14:42.791 response: 00:14:42.791 { 00:14:42.791 "code": -32602, 00:14:42.791 "message": "Invalid parameters" 00:14:42.791 } 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:42.791 Adding namespace failed - expected result. 
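[annotation, not part of the captured trace] The expected failure in test case1 above can be reproduced by hand with the same rpc.py used throughout this run; a minimal sketch follows (same bdev and NQNs as the trace; the exact error text may differ across SPDK versions, and the $rpc variable is just shorthand):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim succeeds (exclusive_write)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: bdev already claimed, JSON-RPC -32602

The second add_ns fails because the first subsystem opened Malloc0 with an exclusive write claim, which is exactly what the bdev.c and subsystem.c errors above report.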
00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:42.791 test case2: host connect to nvmf target in multiple paths 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.791 [2024-07-16 00:16:01.400941] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:42.791 00:16:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:43.727 00:16:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:45.210 00:16:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:45.210 00:16:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1192 -- # local i=0 00:14:45.210 00:16:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:14:45.210 00:16:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:14:45.210 00:16:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # sleep 2 00:14:47.115 00:16:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:14:47.115 00:16:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:14:47.115 00:16:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:14:47.115 00:16:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:14:47.115 00:16:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:14:47.115 00:16:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # return 0 00:14:47.115 00:16:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:47.116 [global] 00:14:47.116 thread=1 00:14:47.116 invalidate=1 00:14:47.116 rw=write 00:14:47.116 time_based=1 00:14:47.116 runtime=1 00:14:47.116 ioengine=libaio 00:14:47.116 direct=1 00:14:47.116 bs=4096 00:14:47.116 iodepth=1 00:14:47.116 norandommap=0 00:14:47.116 numjobs=1 00:14:47.116 00:14:47.116 verify_dump=1 00:14:47.116 verify_backlog=512 00:14:47.116 verify_state_save=0 00:14:47.116 do_verify=1 00:14:47.116 verify=crc32c-intel 00:14:47.116 [job0] 00:14:47.116 filename=/dev/nvme0n1 00:14:47.116 Could not set queue depth (nvme0n1) 00:14:47.374 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:47.374 fio-3.35 00:14:47.374 Starting 1 thread 00:14:48.754 00:14:48.754 job0: (groupid=0, jobs=1): err= 0: pid=1488400: Tue Jul 16 00:16:07 2024 00:14:48.754 read: IOPS=21, BW=84.9KiB/s (87.0kB/s)(88.0KiB/1036msec) 00:14:48.754 slat (nsec): min=10261, max=23223, avg=21283.77, stdev=2554.17 
00:14:48.754 clat (usec): min=40785, max=41360, avg=40980.30, stdev=111.85 00:14:48.754 lat (usec): min=40808, max=41371, avg=41001.58, stdev=109.85 00:14:48.754 clat percentiles (usec): 00:14:48.754 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:14:48.754 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:48.754 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:48.754 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:48.754 | 99.99th=[41157] 00:14:48.754 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:14:48.754 slat (usec): min=10, max=23875, avg=58.55, stdev=1054.64 00:14:48.754 clat (usec): min=183, max=391, avg=198.83, stdev=16.60 00:14:48.754 lat (usec): min=194, max=24267, avg=257.38, stdev=1063.29 00:14:48.754 clat percentiles (usec): 00:14:48.754 | 1.00th=[ 186], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 00:14:48.754 | 30.00th=[ 194], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 198], 00:14:48.755 | 70.00th=[ 200], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 221], 00:14:48.755 | 99.00th=[ 241], 99.50th=[ 306], 99.90th=[ 392], 99.95th=[ 392], 00:14:48.755 | 99.99th=[ 392] 00:14:48.755 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:14:48.755 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:48.755 lat (usec) : 250=94.94%, 500=0.94% 00:14:48.755 lat (msec) : 50=4.12% 00:14:48.755 cpu : usr=0.39%, sys=0.87%, ctx=537, majf=0, minf=2 00:14:48.755 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:48.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.755 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.755 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:48.755 00:14:48.755 Run status group 0 (all jobs): 00:14:48.755 READ: bw=84.9KiB/s (87.0kB/s), 84.9KiB/s-84.9KiB/s (87.0kB/s-87.0kB/s), io=88.0KiB (90.1kB), run=1036-1036msec 00:14:48.755 WRITE: bw=1977KiB/s (2024kB/s), 1977KiB/s-1977KiB/s (2024kB/s-2024kB/s), io=2048KiB (2097kB), run=1036-1036msec 00:14:48.755 00:14:48.755 Disk stats (read/write): 00:14:48.755 nvme0n1: ios=44/512, merge=0/0, ticks=1723/100, in_queue=1823, util=98.50% 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:48.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1213 -- # local i=0 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1225 -- # return 0 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:48.755 rmmod nvme_tcp 00:14:48.755 rmmod nvme_fabrics 00:14:48.755 rmmod nvme_keyring 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1487321 ']' 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1487321 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@942 -- # '[' -z 1487321 ']' 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # kill -0 1487321 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@947 -- # uname 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:14:48.755 00:16:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1487321 00:14:49.014 00:16:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:14:49.014 00:16:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:14:49.014 00:16:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1487321' 00:14:49.014 killing process with pid 1487321 00:14:49.014 00:16:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@961 -- # kill 1487321 00:14:49.014 00:16:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # wait 1487321 00:14:49.014 00:16:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:49.014 00:16:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:49.014 00:16:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:49.014 00:16:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:49.014 00:16:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:49.014 00:16:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.014 00:16:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.014 00:16:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.551 00:16:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:51.551 00:14:51.551 real 0m15.237s 00:14:51.551 user 0m35.911s 00:14:51.551 sys 0m5.006s 00:14:51.551 00:16:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1118 -- # xtrace_disable 00:14:51.551 00:16:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:51.551 ************************************ 00:14:51.551 END TEST nvmf_nmic 00:14:51.551 ************************************ 00:14:51.551 00:16:09 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:14:51.551 00:16:09 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:51.551 00:16:09 nvmf_tcp -- 
common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:14:51.551 00:16:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:14:51.551 00:16:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:51.551 ************************************ 00:14:51.551 START TEST nvmf_fio_target 00:14:51.551 ************************************ 00:14:51.551 00:16:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:51.551 * Looking for test storage... 00:14:51.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.551 00:16:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:51.552 00:16:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:56.830 00:16:15 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:56.830 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:56.830 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.830 00:16:15 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:56.830 Found net devices under 0000:86:00.0: cvl_0_0 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:56.830 Found net devices under 0000:86:00.1: cvl_0_1 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:56.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:14:56.830 00:14:56.830 --- 10.0.0.2 ping statistics --- 00:14:56.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.830 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:56.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:56.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:14:56.830 00:14:56.830 --- 10.0.0.1 ping statistics --- 00:14:56.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.830 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.830 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1492137 00:14:56.831 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1492137 00:14:56.831 00:16:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:56.831 00:16:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@823 -- # '[' -z 1492137 ']' 00:14:56.831 00:16:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.831 00:16:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:14:56.831 00:16:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
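[annotation, not part of the captured trace] Before the four-job fio run, fio.sh assembles one subsystem with four namespaces out of plain malloc bdevs plus a raid0 and a concat raid. The RPC sequence traced below condenses to roughly this sketch (the for-loop is illustrative shorthand; the trace issues each add_ns call individually):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                      # repeated, yielding Malloc0..Malloc6
    $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for b in Malloc0 Malloc1 raid0 concat0; do          # the four namespaces fio later sees as nvme0n1..n4
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $b
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420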
00:14:56.831 00:16:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:14:56.831 00:16:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.089 [2024-07-16 00:16:15.685735] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:14:57.089 [2024-07-16 00:16:15.685783] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.089 [2024-07-16 00:16:15.741506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.089 [2024-07-16 00:16:15.822214] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.089 [2024-07-16 00:16:15.822252] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.089 [2024-07-16 00:16:15.822258] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.089 [2024-07-16 00:16:15.822266] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.089 [2024-07-16 00:16:15.822275] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.089 [2024-07-16 00:16:15.822311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.089 [2024-07-16 00:16:15.822398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.089 [2024-07-16 00:16:15.822488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.089 [2024-07-16 00:16:15.822489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.655 00:16:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:57.655 00:16:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # return 0 00:14:57.655 00:16:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:57.655 00:16:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:57.655 00:16:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.913 00:16:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.913 00:16:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:57.913 [2024-07-16 00:16:16.685790] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.913 00:16:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:58.170 00:16:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:58.170 00:16:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:58.428 00:16:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:58.428 00:16:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:58.686 00:16:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:58.686 00:16:17 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:58.686 00:16:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:58.686 00:16:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:58.945 00:16:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:59.204 00:16:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:59.204 00:16:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:59.463 00:16:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:59.463 00:16:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:59.463 00:16:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:59.463 00:16:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:59.722 00:16:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:59.983 00:16:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:59.983 00:16:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:59.983 00:16:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:59.983 00:16:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:00.241 00:16:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.499 [2024-07-16 00:16:19.154789] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.499 00:16:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:00.758 00:16:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:00.758 00:16:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:02.138 00:16:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:02.138 00:16:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1192 -- # local i=0 00:15:02.138 00:16:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 
nvme_devices=0 00:15:02.138 00:16:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # [[ -n 4 ]] 00:15:02.138 00:16:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # nvme_device_counter=4 00:15:02.138 00:16:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # sleep 2 00:15:04.045 00:16:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:15:04.045 00:16:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:15:04.045 00:16:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:15:04.045 00:16:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_devices=4 00:15:04.045 00:16:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:15:04.045 00:16:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # return 0 00:15:04.045 00:16:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:04.045 [global] 00:15:04.045 thread=1 00:15:04.045 invalidate=1 00:15:04.045 rw=write 00:15:04.045 time_based=1 00:15:04.045 runtime=1 00:15:04.045 ioengine=libaio 00:15:04.045 direct=1 00:15:04.045 bs=4096 00:15:04.045 iodepth=1 00:15:04.045 norandommap=0 00:15:04.045 numjobs=1 00:15:04.045 00:15:04.045 verify_dump=1 00:15:04.045 verify_backlog=512 00:15:04.045 verify_state_save=0 00:15:04.045 do_verify=1 00:15:04.045 verify=crc32c-intel 00:15:04.045 [job0] 00:15:04.045 filename=/dev/nvme0n1 00:15:04.045 [job1] 00:15:04.045 filename=/dev/nvme0n2 00:15:04.045 [job2] 00:15:04.045 filename=/dev/nvme0n3 00:15:04.045 [job3] 00:15:04.045 filename=/dev/nvme0n4 00:15:04.045 Could not set queue depth (nvme0n1) 00:15:04.045 Could not set queue depth (nvme0n2) 00:15:04.045 Could not set queue depth (nvme0n3) 00:15:04.045 Could not set queue depth (nvme0n4) 00:15:04.304 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:04.304 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:04.304 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:04.304 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:04.304 fio-3.35 00:15:04.304 Starting 4 threads 00:15:05.687 00:15:05.687 job0: (groupid=0, jobs=1): err= 0: pid=1493498: Tue Jul 16 00:16:24 2024 00:15:05.687 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:05.687 slat (nsec): min=6371, max=22922, avg=7192.27, stdev=857.16 00:15:05.687 clat (usec): min=208, max=41400, avg=383.76, stdev=1049.22 00:15:05.687 lat (usec): min=215, max=41407, avg=390.95, stdev=1049.21 00:15:05.687 clat percentiles (usec): 00:15:05.687 | 1.00th=[ 227], 5.00th=[ 258], 10.00th=[ 297], 20.00th=[ 322], 00:15:05.687 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 355], 00:15:05.687 | 70.00th=[ 379], 80.00th=[ 404], 90.00th=[ 453], 95.00th=[ 474], 00:15:05.687 | 99.00th=[ 502], 99.50th=[ 523], 99.90th=[ 1303], 99.95th=[41157], 00:15:05.687 | 99.99th=[41157] 00:15:05.687 write: IOPS=1703, BW=6813KiB/s (6977kB/s)(6820KiB/1001msec); 0 zone resets 00:15:05.687 slat (nsec): min=8896, max=60414, avg=10068.24, stdev=1917.14 00:15:05.687 clat (usec): min=164, max=460, avg=220.06, stdev=31.47 00:15:05.687 lat 
(usec): min=174, max=521, avg=230.13, stdev=31.92 00:15:05.687 clat percentiles (usec): 00:15:05.687 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:15:05.687 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 217], 00:15:05.687 | 70.00th=[ 225], 80.00th=[ 237], 90.00th=[ 265], 95.00th=[ 285], 00:15:05.687 | 99.00th=[ 326], 99.50th=[ 343], 99.90th=[ 379], 99.95th=[ 461], 00:15:05.687 | 99.99th=[ 461] 00:15:05.687 bw ( KiB/s): min= 8192, max= 8192, per=32.45%, avg=8192.00, stdev= 0.00, samples=1 00:15:05.687 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:05.687 lat (usec) : 250=46.93%, 500=52.48%, 750=0.52% 00:15:05.687 lat (msec) : 2=0.03%, 50=0.03% 00:15:05.687 cpu : usr=1.70%, sys=2.90%, ctx=3242, majf=0, minf=1 00:15:05.687 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:05.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.687 issued rwts: total=1536,1705,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.687 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:05.687 job1: (groupid=0, jobs=1): err= 0: pid=1493500: Tue Jul 16 00:16:24 2024 00:15:05.687 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:05.687 slat (nsec): min=7357, max=41205, avg=9194.75, stdev=1566.63 00:15:05.687 clat (usec): min=276, max=1515, avg=389.82, stdev=74.28 00:15:05.687 lat (usec): min=284, max=1523, avg=399.01, stdev=74.38 00:15:05.687 clat percentiles (usec): 00:15:05.687 | 1.00th=[ 306], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 330], 00:15:05.687 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 363], 60.00th=[ 383], 00:15:05.687 | 70.00th=[ 420], 80.00th=[ 478], 90.00th=[ 494], 95.00th=[ 506], 00:15:05.687 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 578], 99.95th=[ 1516], 00:15:05.687 | 99.99th=[ 1516] 00:15:05.687 write: IOPS=1539, BW=6158KiB/s (6306kB/s)(6164KiB/1001msec); 0 zone resets 00:15:05.687 slat (nsec): min=10571, max=40775, avg=12729.12, stdev=1604.48 00:15:05.687 clat (usec): min=166, max=540, avg=232.16, stdev=31.62 00:15:05.687 lat (usec): min=177, max=581, avg=244.89, stdev=31.64 00:15:05.687 clat percentiles (usec): 00:15:05.687 | 1.00th=[ 178], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 206], 00:15:05.687 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 237], 00:15:05.687 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 273], 95.00th=[ 293], 00:15:05.687 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 363], 99.95th=[ 537], 00:15:05.687 | 99.99th=[ 537] 00:15:05.687 bw ( KiB/s): min= 8192, max= 8192, per=32.45%, avg=8192.00, stdev= 0.00, samples=1 00:15:05.687 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:05.687 lat (usec) : 250=38.77%, 500=57.20%, 750=4.00% 00:15:05.687 lat (msec) : 2=0.03% 00:15:05.687 cpu : usr=2.10%, sys=3.30%, ctx=3081, majf=0, minf=1 00:15:05.687 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:05.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.687 issued rwts: total=1536,1541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.687 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:05.687 job2: (groupid=0, jobs=1): err= 0: pid=1493501: Tue Jul 16 00:16:24 2024 00:15:05.687 read: IOPS=1332, BW=5331KiB/s (5459kB/s)(5336KiB/1001msec) 00:15:05.687 slat (nsec): min=7535, 
max=26444, avg=8913.36, stdev=1354.89 00:15:05.687 clat (usec): min=297, max=1375, avg=419.83, stdev=68.59 00:15:05.687 lat (usec): min=307, max=1384, avg=428.75, stdev=68.93 00:15:05.687 clat percentiles (usec): 00:15:05.687 | 1.00th=[ 322], 5.00th=[ 343], 10.00th=[ 355], 20.00th=[ 375], 00:15:05.687 | 30.00th=[ 388], 40.00th=[ 400], 50.00th=[ 412], 60.00th=[ 420], 00:15:05.687 | 70.00th=[ 433], 80.00th=[ 457], 90.00th=[ 494], 95.00th=[ 537], 00:15:05.687 | 99.00th=[ 611], 99.50th=[ 652], 99.90th=[ 1287], 99.95th=[ 1369], 00:15:05.687 | 99.99th=[ 1369] 00:15:05.687 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:05.687 slat (nsec): min=10082, max=37332, avg=12053.90, stdev=1910.64 00:15:05.687 clat (usec): min=161, max=1991, avg=260.72, stdev=70.51 00:15:05.687 lat (usec): min=174, max=2008, avg=272.78, stdev=71.02 00:15:05.687 clat percentiles (usec): 00:15:05.687 | 1.00th=[ 178], 5.00th=[ 198], 10.00th=[ 212], 20.00th=[ 223], 00:15:05.687 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 255], 00:15:05.687 | 70.00th=[ 269], 80.00th=[ 302], 90.00th=[ 334], 95.00th=[ 359], 00:15:05.687 | 99.00th=[ 437], 99.50th=[ 465], 99.90th=[ 938], 99.95th=[ 1991], 00:15:05.687 | 99.99th=[ 1991] 00:15:05.687 bw ( KiB/s): min= 7528, max= 7528, per=29.82%, avg=7528.00, stdev= 0.00, samples=1 00:15:05.687 iops : min= 1882, max= 1882, avg=1882.00, stdev= 0.00, samples=1 00:15:05.688 lat (usec) : 250=30.07%, 500=65.96%, 750=3.76%, 1000=0.10% 00:15:05.688 lat (msec) : 2=0.10% 00:15:05.688 cpu : usr=2.70%, sys=4.40%, ctx=2871, majf=0, minf=1 00:15:05.688 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:05.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.688 issued rwts: total=1334,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.688 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:05.688 job3: (groupid=0, jobs=1): err= 0: pid=1493502: Tue Jul 16 00:16:24 2024 00:15:05.688 read: IOPS=1377, BW=5510KiB/s (5643kB/s)(5516KiB/1001msec) 00:15:05.688 slat (nsec): min=6274, max=26983, avg=7257.07, stdev=1212.41 00:15:05.688 clat (usec): min=289, max=41359, avg=442.80, stdev=1104.63 00:15:05.688 lat (usec): min=296, max=41370, avg=450.06, stdev=1104.74 00:15:05.688 clat percentiles (usec): 00:15:05.688 | 1.00th=[ 306], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 347], 00:15:05.688 | 30.00th=[ 363], 40.00th=[ 383], 50.00th=[ 404], 60.00th=[ 429], 00:15:05.688 | 70.00th=[ 474], 80.00th=[ 490], 90.00th=[ 502], 95.00th=[ 510], 00:15:05.688 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 594], 99.95th=[41157], 00:15:05.688 | 99.99th=[41157] 00:15:05.688 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:05.688 slat (nsec): min=8847, max=57380, avg=10062.45, stdev=1702.74 00:15:05.688 clat (usec): min=170, max=449, avg=232.78, stdev=29.33 00:15:05.688 lat (usec): min=180, max=507, avg=242.84, stdev=29.65 00:15:05.688 clat percentiles (usec): 00:15:05.688 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 208], 00:15:05.688 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 239], 00:15:05.688 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 281], 00:15:05.688 | 99.00th=[ 326], 99.50th=[ 351], 99.90th=[ 396], 99.95th=[ 449], 00:15:05.688 | 99.99th=[ 449] 00:15:05.688 bw ( KiB/s): min= 8192, max= 8192, per=32.45%, avg=8192.00, stdev= 0.00, samples=1 00:15:05.688 iops : 
min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:05.688 lat (usec) : 250=39.45%, 500=55.09%, 750=5.42% 00:15:05.688 lat (msec) : 50=0.03% 00:15:05.688 cpu : usr=1.70%, sys=2.40%, ctx=2916, majf=0, minf=2 00:15:05.688 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:05.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.688 issued rwts: total=1379,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.688 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:05.688 00:15:05.688 Run status group 0 (all jobs): 00:15:05.688 READ: bw=22.6MiB/s (23.7MB/s), 5331KiB/s-6138KiB/s (5459kB/s-6285kB/s), io=22.6MiB (23.7MB), run=1001-1001msec 00:15:05.688 WRITE: bw=24.7MiB/s (25.9MB/s), 6138KiB/s-6813KiB/s (6285kB/s-6977kB/s), io=24.7MiB (25.9MB), run=1001-1001msec 00:15:05.688 00:15:05.688 Disk stats (read/write): 00:15:05.688 nvme0n1: ios=1405/1536, merge=0/0, ticks=498/337, in_queue=835, util=87.17% 00:15:05.688 nvme0n2: ios=1137/1536, merge=0/0, ticks=1438/345, in_queue=1783, util=98.98% 00:15:05.688 nvme0n3: ios=1048/1500, merge=0/0, ticks=1395/376, in_queue=1771, util=98.65% 00:15:05.688 nvme0n4: ios=1141/1536, merge=0/0, ticks=461/353, in_queue=814, util=89.71% 00:15:05.688 00:16:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:05.688 [global] 00:15:05.688 thread=1 00:15:05.688 invalidate=1 00:15:05.688 rw=randwrite 00:15:05.688 time_based=1 00:15:05.688 runtime=1 00:15:05.688 ioengine=libaio 00:15:05.688 direct=1 00:15:05.688 bs=4096 00:15:05.688 iodepth=1 00:15:05.688 norandommap=0 00:15:05.688 numjobs=1 00:15:05.688 00:15:05.688 verify_dump=1 00:15:05.688 verify_backlog=512 00:15:05.688 verify_state_save=0 00:15:05.688 do_verify=1 00:15:05.688 verify=crc32c-intel 00:15:05.688 [job0] 00:15:05.688 filename=/dev/nvme0n1 00:15:05.688 [job1] 00:15:05.688 filename=/dev/nvme0n2 00:15:05.688 [job2] 00:15:05.688 filename=/dev/nvme0n3 00:15:05.688 [job3] 00:15:05.688 filename=/dev/nvme0n4 00:15:05.688 Could not set queue depth (nvme0n1) 00:15:05.688 Could not set queue depth (nvme0n2) 00:15:05.688 Could not set queue depth (nvme0n3) 00:15:05.688 Could not set queue depth (nvme0n4) 00:15:05.946 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:05.947 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:05.947 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:05.947 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:05.947 fio-3.35 00:15:05.947 Starting 4 threads 00:15:07.345 00:15:07.345 job0: (groupid=0, jobs=1): err= 0: pid=1493874: Tue Jul 16 00:16:25 2024 00:15:07.345 read: IOPS=21, BW=86.5KiB/s (88.6kB/s)(88.0KiB/1017msec) 00:15:07.345 slat (nsec): min=11057, max=25131, avg=22118.91, stdev=2552.90 00:15:07.345 clat (usec): min=40875, max=42478, avg=41068.60, stdev=330.67 00:15:07.345 lat (usec): min=40898, max=42501, avg=41090.72, stdev=330.74 00:15:07.345 clat percentiles (usec): 00:15:07.345 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:15:07.345 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:07.345 | 
70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:07.345 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:15:07.345 | 99.99th=[42730] 00:15:07.345 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:15:07.345 slat (nsec): min=9536, max=50786, avg=12178.43, stdev=2908.13 00:15:07.345 clat (usec): min=157, max=452, avg=204.61, stdev=37.66 00:15:07.345 lat (usec): min=168, max=478, avg=216.79, stdev=38.43 00:15:07.345 clat percentiles (usec): 00:15:07.345 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:15:07.345 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 200], 00:15:07.345 | 70.00th=[ 210], 80.00th=[ 227], 90.00th=[ 265], 95.00th=[ 277], 00:15:07.345 | 99.00th=[ 314], 99.50th=[ 330], 99.90th=[ 453], 99.95th=[ 453], 00:15:07.345 | 99.99th=[ 453] 00:15:07.345 bw ( KiB/s): min= 4096, max= 4096, per=25.69%, avg=4096.00, stdev= 0.00, samples=1 00:15:07.345 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:07.345 lat (usec) : 250=83.15%, 500=12.73% 00:15:07.345 lat (msec) : 50=4.12% 00:15:07.345 cpu : usr=0.59%, sys=0.79%, ctx=535, majf=0, minf=1 00:15:07.345 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.345 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.345 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.345 job1: (groupid=0, jobs=1): err= 0: pid=1493875: Tue Jul 16 00:16:25 2024 00:15:07.345 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:15:07.345 slat (nsec): min=7246, max=44108, avg=8411.55, stdev=2327.40 00:15:07.345 clat (usec): min=271, max=41326, avg=625.13, stdev=3349.22 00:15:07.345 lat (usec): min=279, max=41334, avg=633.54, stdev=3349.36 00:15:07.345 clat percentiles (usec): 00:15:07.345 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 306], 00:15:07.345 | 30.00th=[ 318], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 359], 00:15:07.345 | 70.00th=[ 367], 80.00th=[ 375], 90.00th=[ 392], 95.00th=[ 412], 00:15:07.345 | 99.00th=[ 529], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:15:07.345 | 99.99th=[41157] 00:15:07.345 write: IOPS=1528, BW=6114KiB/s (6261kB/s)(6120KiB/1001msec); 0 zone resets 00:15:07.345 slat (nsec): min=10305, max=40734, avg=11634.51, stdev=1895.47 00:15:07.345 clat (usec): min=169, max=412, avg=212.94, stdev=34.37 00:15:07.345 lat (usec): min=180, max=423, avg=224.58, stdev=34.52 00:15:07.345 clat percentiles (usec): 00:15:07.345 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188], 00:15:07.345 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 208], 00:15:07.345 | 70.00th=[ 223], 80.00th=[ 241], 90.00th=[ 262], 95.00th=[ 285], 00:15:07.345 | 99.00th=[ 314], 99.50th=[ 338], 99.90th=[ 404], 99.95th=[ 412], 00:15:07.345 | 99.99th=[ 412] 00:15:07.345 bw ( KiB/s): min= 8192, max= 8192, per=51.38%, avg=8192.00, stdev= 0.00, samples=1 00:15:07.345 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:07.345 lat (usec) : 250=51.64%, 500=47.85%, 750=0.23% 00:15:07.345 lat (msec) : 50=0.27% 00:15:07.345 cpu : usr=2.20%, sys=4.10%, ctx=2555, majf=0, minf=1 00:15:07.345 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.345 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.345 issued rwts: total=1024,1530,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.345 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.345 job2: (groupid=0, jobs=1): err= 0: pid=1493876: Tue Jul 16 00:16:25 2024 00:15:07.345 read: IOPS=1127, BW=4511KiB/s (4619kB/s)(4628KiB/1026msec) 00:15:07.345 slat (nsec): min=6431, max=25539, avg=7776.82, stdev=1320.43 00:15:07.345 clat (usec): min=267, max=41549, avg=560.93, stdev=2683.77 00:15:07.345 lat (usec): min=275, max=41557, avg=568.71, stdev=2683.82 00:15:07.345 clat percentiles (usec): 00:15:07.345 | 1.00th=[ 289], 5.00th=[ 310], 10.00th=[ 326], 20.00th=[ 355], 00:15:07.345 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 392], 00:15:07.345 | 70.00th=[ 400], 80.00th=[ 416], 90.00th=[ 437], 95.00th=[ 457], 00:15:07.345 | 99.00th=[ 553], 99.50th=[ 685], 99.90th=[41157], 99.95th=[41681], 00:15:07.345 | 99.99th=[41681] 00:15:07.345 write: IOPS=1497, BW=5988KiB/s (6132kB/s)(6144KiB/1026msec); 0 zone resets 00:15:07.345 slat (nsec): min=8962, max=38809, avg=10714.09, stdev=1921.31 00:15:07.345 clat (usec): min=166, max=908, avg=223.95, stdev=52.08 00:15:07.345 lat (usec): min=177, max=919, avg=234.67, stdev=51.95 00:15:07.345 clat percentiles (usec): 00:15:07.345 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:15:07.345 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 210], 00:15:07.345 | 70.00th=[ 223], 80.00th=[ 247], 90.00th=[ 289], 95.00th=[ 338], 00:15:07.345 | 99.00th=[ 400], 99.50th=[ 416], 99.90th=[ 693], 99.95th=[ 906], 00:15:07.345 | 99.99th=[ 906] 00:15:07.345 bw ( KiB/s): min= 4096, max= 8192, per=38.53%, avg=6144.00, stdev=2896.31, samples=2 00:15:07.345 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:15:07.345 lat (usec) : 250=46.34%, 500=52.47%, 750=0.97%, 1000=0.04% 00:15:07.345 lat (msec) : 50=0.19% 00:15:07.345 cpu : usr=2.05%, sys=3.02%, ctx=2693, majf=0, minf=1 00:15:07.345 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.345 issued rwts: total=1157,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.345 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.345 job3: (groupid=0, jobs=1): err= 0: pid=1493877: Tue Jul 16 00:16:25 2024 00:15:07.345 read: IOPS=25, BW=102KiB/s (104kB/s)(104KiB/1022msec) 00:15:07.345 slat (nsec): min=7356, max=23266, avg=19883.96, stdev=5776.54 00:15:07.345 clat (usec): min=306, max=42096, avg=34990.42, stdev=15042.79 00:15:07.345 lat (usec): min=316, max=42119, avg=35010.30, stdev=15044.59 00:15:07.345 clat percentiles (usec): 00:15:07.345 | 1.00th=[ 306], 5.00th=[ 338], 10.00th=[ 494], 20.00th=[40633], 00:15:07.345 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:07.345 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:15:07.345 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:07.345 | 99.99th=[42206] 00:15:07.345 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:15:07.345 slat (nsec): min=9227, max=90007, avg=13432.94, stdev=13978.01 00:15:07.345 clat (usec): min=153, max=447, avg=202.08, stdev=32.01 00:15:07.345 lat (usec): min=164, max=485, avg=215.51, stdev=35.83 00:15:07.345 clat percentiles (usec): 00:15:07.345 | 1.00th=[ 
159], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 180], 00:15:07.345 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 198], 00:15:07.345 | 70.00th=[ 208], 80.00th=[ 223], 90.00th=[ 245], 95.00th=[ 269], 00:15:07.345 | 99.00th=[ 297], 99.50th=[ 347], 99.90th=[ 449], 99.95th=[ 449], 00:15:07.345 | 99.99th=[ 449] 00:15:07.345 bw ( KiB/s): min= 4096, max= 4096, per=25.69%, avg=4096.00, stdev= 0.00, samples=1 00:15:07.345 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:07.345 lat (usec) : 250=86.43%, 500=9.29%, 750=0.19% 00:15:07.345 lat (msec) : 50=4.09% 00:15:07.345 cpu : usr=0.20%, sys=0.59%, ctx=539, majf=0, minf=2 00:15:07.346 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.346 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.346 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.346 00:15:07.346 Run status group 0 (all jobs): 00:15:07.346 READ: bw=8690KiB/s (8899kB/s), 86.5KiB/s-4511KiB/s (88.6kB/s-4619kB/s), io=8916KiB (9130kB), run=1001-1026msec 00:15:07.346 WRITE: bw=15.6MiB/s (16.3MB/s), 2004KiB/s-6114KiB/s (2052kB/s-6261kB/s), io=16.0MiB (16.8MB), run=1001-1026msec 00:15:07.346 00:15:07.346 Disk stats (read/write): 00:15:07.346 nvme0n1: ios=55/512, merge=0/0, ticks=1660/95, in_queue=1755, util=97.29% 00:15:07.346 nvme0n2: ios=984/1024, merge=0/0, ticks=1553/207, in_queue=1760, util=98.58% 00:15:07.346 nvme0n3: ios=1041/1536, merge=0/0, ticks=472/325, in_queue=797, util=89.07% 00:15:07.346 nvme0n4: ios=47/512, merge=0/0, ticks=1690/102, in_queue=1792, util=98.53% 00:15:07.346 00:16:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:07.346 [global] 00:15:07.346 thread=1 00:15:07.346 invalidate=1 00:15:07.346 rw=write 00:15:07.346 time_based=1 00:15:07.346 runtime=1 00:15:07.346 ioengine=libaio 00:15:07.346 direct=1 00:15:07.346 bs=4096 00:15:07.346 iodepth=128 00:15:07.346 norandommap=0 00:15:07.346 numjobs=1 00:15:07.346 00:15:07.346 verify_dump=1 00:15:07.346 verify_backlog=512 00:15:07.346 verify_state_save=0 00:15:07.346 do_verify=1 00:15:07.346 verify=crc32c-intel 00:15:07.346 [job0] 00:15:07.346 filename=/dev/nvme0n1 00:15:07.346 [job1] 00:15:07.346 filename=/dev/nvme0n2 00:15:07.346 [job2] 00:15:07.346 filename=/dev/nvme0n3 00:15:07.346 [job3] 00:15:07.346 filename=/dev/nvme0n4 00:15:07.346 Could not set queue depth (nvme0n1) 00:15:07.346 Could not set queue depth (nvme0n2) 00:15:07.346 Could not set queue depth (nvme0n3) 00:15:07.346 Could not set queue depth (nvme0n4) 00:15:07.610 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:07.610 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:07.610 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:07.610 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:07.610 fio-3.35 00:15:07.610 Starting 4 threads 00:15:08.989 00:15:08.989 job0: (groupid=0, jobs=1): err= 0: pid=1494248: Tue Jul 16 00:16:27 2024 00:15:08.989 read: IOPS=4547, BW=17.8MiB/s (18.6MB/s)(17.8MiB/1004msec) 00:15:08.989 slat (nsec): min=1378, 
max=13987k, avg=117007.27, stdev=800408.18 00:15:08.989 clat (usec): min=3311, max=78182, avg=13889.81, stdev=8672.41 00:15:08.989 lat (usec): min=4280, max=78185, avg=14006.82, stdev=8771.34 00:15:08.989 clat percentiles (usec): 00:15:08.989 | 1.00th=[ 6652], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[ 9765], 00:15:08.989 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[11994], 00:15:08.989 | 70.00th=[12911], 80.00th=[15139], 90.00th=[19792], 95.00th=[25822], 00:15:08.989 | 99.00th=[65274], 99.50th=[71828], 99.90th=[78119], 99.95th=[78119], 00:15:08.989 | 99.99th=[78119] 00:15:08.989 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:15:08.989 slat (usec): min=2, max=14192, avg=88.48, stdev=524.88 00:15:08.989 clat (usec): min=1449, max=71897, avg=13837.14, stdev=10778.37 00:15:08.989 lat (usec): min=1517, max=71906, avg=13925.62, stdev=10802.32 00:15:08.989 clat percentiles (usec): 00:15:08.989 | 1.00th=[ 3261], 5.00th=[ 5604], 10.00th=[ 6849], 20.00th=[ 9372], 00:15:08.989 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11338], 60.00th=[11469], 00:15:08.989 | 70.00th=[12387], 80.00th=[14222], 90.00th=[19792], 95.00th=[39584], 00:15:08.989 | 99.00th=[66323], 99.50th=[69731], 99.90th=[70779], 99.95th=[70779], 00:15:08.989 | 99.99th=[71828] 00:15:08.989 bw ( KiB/s): min=16351, max=20480, per=23.72%, avg=18415.50, stdev=2919.64, samples=2 00:15:08.989 iops : min= 4087, max= 5120, avg=4603.50, stdev=730.44, samples=2 00:15:08.989 lat (msec) : 2=0.12%, 4=0.99%, 10=22.69%, 20=66.94%, 50=6.93% 00:15:08.989 lat (msec) : 100=2.32% 00:15:08.989 cpu : usr=3.69%, sys=5.08%, ctx=506, majf=0, minf=1 00:15:08.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:08.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:08.989 issued rwts: total=4566,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:08.989 job1: (groupid=0, jobs=1): err= 0: pid=1494249: Tue Jul 16 00:16:27 2024 00:15:08.989 read: IOPS=4847, BW=18.9MiB/s (19.9MB/s)(19.0MiB/1003msec) 00:15:08.989 slat (nsec): min=1429, max=16412k, avg=100235.75, stdev=671455.98 00:15:08.989 clat (usec): min=617, max=44992, avg=13433.79, stdev=5849.20 00:15:08.989 lat (usec): min=2556, max=49511, avg=13534.03, stdev=5879.09 00:15:08.989 clat percentiles (usec): 00:15:08.989 | 1.00th=[ 5538], 5.00th=[ 8291], 10.00th=[ 9241], 20.00th=[ 9765], 00:15:08.989 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11600], 60.00th=[12256], 00:15:08.989 | 70.00th=[13173], 80.00th=[14746], 90.00th=[22676], 95.00th=[27657], 00:15:08.989 | 99.00th=[30802], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:15:08.989 | 99.99th=[44827] 00:15:08.989 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:15:08.989 slat (usec): min=2, max=14843, avg=94.14, stdev=624.51 00:15:08.989 clat (usec): min=1794, max=47104, avg=12004.66, stdev=5679.07 00:15:08.989 lat (usec): min=1853, max=47111, avg=12098.81, stdev=5718.15 00:15:08.989 clat percentiles (usec): 00:15:08.989 | 1.00th=[ 4752], 5.00th=[ 6652], 10.00th=[ 8160], 20.00th=[ 9372], 00:15:08.989 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10683], 60.00th=[11076], 00:15:08.989 | 70.00th=[11600], 80.00th=[12649], 90.00th=[16712], 95.00th=[26346], 00:15:08.989 | 99.00th=[39060], 99.50th=[40633], 99.90th=[46924], 99.95th=[46924], 00:15:08.989 | 99.99th=[46924] 
00:15:08.989 bw ( KiB/s): min=19488, max=21472, per=26.37%, avg=20480.00, stdev=1402.90, samples=2 00:15:08.989 iops : min= 4872, max= 5368, avg=5120.00, stdev=350.72, samples=2 00:15:08.989 lat (usec) : 750=0.01% 00:15:08.989 lat (msec) : 2=0.05%, 4=0.36%, 10=28.22%, 20=60.75%, 50=10.61% 00:15:08.989 cpu : usr=3.69%, sys=5.29%, ctx=433, majf=0, minf=1 00:15:08.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:08.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:08.989 issued rwts: total=4862,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:08.989 job2: (groupid=0, jobs=1): err= 0: pid=1494250: Tue Jul 16 00:16:27 2024 00:15:08.989 read: IOPS=4846, BW=18.9MiB/s (19.9MB/s)(19.0MiB/1003msec) 00:15:08.989 slat (nsec): min=1118, max=15953k, avg=96552.31, stdev=766587.07 00:15:08.989 clat (usec): min=1163, max=32887, avg=13710.42, stdev=4365.76 00:15:08.989 lat (usec): min=1175, max=32896, avg=13806.98, stdev=4415.19 00:15:08.989 clat percentiles (usec): 00:15:08.989 | 1.00th=[ 2442], 5.00th=[ 8160], 10.00th=[ 8979], 20.00th=[10159], 00:15:08.989 | 30.00th=[11469], 40.00th=[12256], 50.00th=[13304], 60.00th=[14353], 00:15:08.989 | 70.00th=[15664], 80.00th=[16909], 90.00th=[19792], 95.00th=[21627], 00:15:08.989 | 99.00th=[23987], 99.50th=[32900], 99.90th=[32900], 99.95th=[32900], 00:15:08.989 | 99.99th=[32900] 00:15:08.989 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:15:08.989 slat (usec): min=2, max=14177, avg=85.04, stdev=653.18 00:15:08.989 clat (usec): min=637, max=42711, avg=11846.42, stdev=4860.77 00:15:08.989 lat (usec): min=647, max=42714, avg=11931.46, stdev=4868.71 00:15:08.989 clat percentiles (usec): 00:15:08.989 | 1.00th=[ 816], 5.00th=[ 4178], 10.00th=[ 6587], 20.00th=[ 8160], 00:15:08.989 | 30.00th=[ 9896], 40.00th=[11076], 50.00th=[11731], 60.00th=[12780], 00:15:08.989 | 70.00th=[13698], 80.00th=[14484], 90.00th=[17171], 95.00th=[18220], 00:15:08.989 | 99.00th=[25822], 99.50th=[38011], 99.90th=[42730], 99.95th=[42730], 00:15:08.989 | 99.99th=[42730] 00:15:08.989 bw ( KiB/s): min=19344, max=21616, per=26.37%, avg=20480.00, stdev=1606.55, samples=2 00:15:08.989 iops : min= 4836, max= 5404, avg=5120.00, stdev=401.64, samples=2 00:15:08.989 lat (usec) : 750=0.10%, 1000=0.63% 00:15:08.989 lat (msec) : 2=0.72%, 4=1.89%, 10=21.13%, 20=69.10%, 50=6.42% 00:15:08.989 cpu : usr=3.69%, sys=5.99%, ctx=371, majf=0, minf=1 00:15:08.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:08.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:08.989 issued rwts: total=4861,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:08.989 job3: (groupid=0, jobs=1): err= 0: pid=1494251: Tue Jul 16 00:16:27 2024 00:15:08.989 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:15:08.989 slat (nsec): min=1235, max=16047k, avg=99328.99, stdev=788386.16 00:15:08.989 clat (usec): min=3950, max=55477, avg=14017.45, stdev=5382.49 00:15:08.989 lat (usec): min=4498, max=58367, avg=14116.78, stdev=5426.93 00:15:08.990 clat percentiles (usec): 00:15:08.990 | 1.00th=[ 5997], 5.00th=[ 7898], 10.00th=[ 9503], 20.00th=[10552], 00:15:08.990 | 
30.00th=[11338], 40.00th=[11994], 50.00th=[12649], 60.00th=[12911], 00:15:08.990 | 70.00th=[15008], 80.00th=[17695], 90.00th=[20841], 95.00th=[23987], 00:15:08.990 | 99.00th=[31327], 99.50th=[31327], 99.90th=[55313], 99.95th=[55313], 00:15:08.990 | 99.99th=[55313] 00:15:08.990 write: IOPS=4623, BW=18.1MiB/s (18.9MB/s)(18.1MiB/1004msec); 0 zone resets 00:15:08.990 slat (nsec): min=1939, max=12219k, avg=84253.75, stdev=589391.25 00:15:08.990 clat (usec): min=984, max=52111, avg=13538.05, stdev=7862.86 00:15:08.990 lat (usec): min=992, max=52115, avg=13622.30, stdev=7892.87 00:15:08.990 clat percentiles (usec): 00:15:08.990 | 1.00th=[ 3130], 5.00th=[ 5342], 10.00th=[ 7046], 20.00th=[ 8160], 00:15:08.990 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11863], 60.00th=[13173], 00:15:08.990 | 70.00th=[14353], 80.00th=[16712], 90.00th=[20055], 95.00th=[29230], 00:15:08.990 | 99.00th=[49021], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:15:08.990 | 99.99th=[52167] 00:15:08.990 bw ( KiB/s): min=18072, max=18792, per=23.74%, avg=18432.00, stdev=509.12, samples=2 00:15:08.990 iops : min= 4518, max= 4698, avg=4608.00, stdev=127.28, samples=2 00:15:08.990 lat (usec) : 1000=0.08% 00:15:08.990 lat (msec) : 2=0.11%, 4=0.77%, 10=22.13%, 20=65.95%, 50=10.37% 00:15:08.990 lat (msec) : 100=0.61% 00:15:08.990 cpu : usr=2.59%, sys=5.48%, ctx=361, majf=0, minf=1 00:15:08.990 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:08.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:08.990 issued rwts: total=4608,4642,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.990 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:08.990 00:15:08.990 Run status group 0 (all jobs): 00:15:08.990 READ: bw=73.5MiB/s (77.1MB/s), 17.8MiB/s-18.9MiB/s (18.6MB/s-19.9MB/s), io=73.8MiB (77.4MB), run=1003-1004msec 00:15:08.990 WRITE: bw=75.8MiB/s (79.5MB/s), 17.9MiB/s-19.9MiB/s (18.8MB/s-20.9MB/s), io=76.1MiB (79.8MB), run=1003-1004msec 00:15:08.990 00:15:08.990 Disk stats (read/write): 00:15:08.990 nvme0n1: ios=3780/4096, merge=0/0, ticks=37844/47399, in_queue=85243, util=97.29% 00:15:08.990 nvme0n2: ios=4115/4177, merge=0/0, ticks=30509/27654, in_queue=58163, util=87.31% 00:15:08.990 nvme0n3: ios=4123/4402, merge=0/0, ticks=50628/42278, in_queue=92906, util=91.68% 00:15:08.990 nvme0n4: ios=3830/4096, merge=0/0, ticks=46287/48404, in_queue=94691, util=89.32% 00:15:08.990 00:16:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:08.990 [global] 00:15:08.990 thread=1 00:15:08.990 invalidate=1 00:15:08.990 rw=randwrite 00:15:08.990 time_based=1 00:15:08.990 runtime=1 00:15:08.990 ioengine=libaio 00:15:08.990 direct=1 00:15:08.990 bs=4096 00:15:08.990 iodepth=128 00:15:08.990 norandommap=0 00:15:08.990 numjobs=1 00:15:08.990 00:15:08.990 verify_dump=1 00:15:08.990 verify_backlog=512 00:15:08.990 verify_state_save=0 00:15:08.990 do_verify=1 00:15:08.990 verify=crc32c-intel 00:15:08.990 [job0] 00:15:08.990 filename=/dev/nvme0n1 00:15:08.990 [job1] 00:15:08.990 filename=/dev/nvme0n2 00:15:08.990 [job2] 00:15:08.990 filename=/dev/nvme0n3 00:15:08.990 [job3] 00:15:08.990 filename=/dev/nvme0n4 00:15:08.990 Could not set queue depth (nvme0n1) 00:15:08.990 Could not set queue depth (nvme0n2) 00:15:08.990 Could not set queue depth (nvme0n3) 00:15:08.990 Could not set queue 
depth (nvme0n4) 00:15:08.990 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:08.990 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:08.990 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:08.990 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:08.990 fio-3.35 00:15:08.990 Starting 4 threads 00:15:10.371 00:15:10.371 job0: (groupid=0, jobs=1): err= 0: pid=1494622: Tue Jul 16 00:16:28 2024 00:15:10.371 read: IOPS=2395, BW=9581KiB/s (9811kB/s)(9600KiB/1002msec) 00:15:10.371 slat (nsec): min=1503, max=14133k, avg=223905.83, stdev=1164517.73 00:15:10.371 clat (usec): min=662, max=57063, avg=27040.67, stdev=9700.19 00:15:10.371 lat (usec): min=3846, max=57071, avg=27264.57, stdev=9714.49 00:15:10.371 clat percentiles (usec): 00:15:10.371 | 1.00th=[ 4113], 5.00th=[16319], 10.00th=[17171], 20.00th=[19006], 00:15:10.371 | 30.00th=[21103], 40.00th=[23462], 50.00th=[25822], 60.00th=[26608], 00:15:10.371 | 70.00th=[29754], 80.00th=[34866], 90.00th=[40633], 95.00th=[47449], 00:15:10.371 | 99.00th=[55837], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:15:10.371 | 99.99th=[56886] 00:15:10.371 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:15:10.371 slat (usec): min=2, max=10545, avg=173.65, stdev=921.63 00:15:10.371 clat (usec): min=10506, max=46867, avg=23757.39, stdev=7963.25 00:15:10.371 lat (usec): min=14540, max=46879, avg=23931.03, stdev=7958.88 00:15:10.371 clat percentiles (usec): 00:15:10.371 | 1.00th=[14484], 5.00th=[15270], 10.00th=[15533], 20.00th=[16450], 00:15:10.371 | 30.00th=[16909], 40.00th=[19268], 50.00th=[21365], 60.00th=[23725], 00:15:10.371 | 70.00th=[27132], 80.00th=[30802], 90.00th=[36963], 95.00th=[38536], 00:15:10.371 | 99.00th=[41157], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:15:10.371 | 99.99th=[46924] 00:15:10.371 bw ( KiB/s): min= 8175, max=12288, per=15.69%, avg=10231.50, stdev=2908.33, samples=2 00:15:10.371 iops : min= 2043, max= 3072, avg=2557.50, stdev=727.61, samples=2 00:15:10.371 lat (usec) : 750=0.02% 00:15:10.371 lat (msec) : 4=0.28%, 10=0.69%, 20=33.65%, 50=63.89%, 100=1.47% 00:15:10.371 cpu : usr=2.00%, sys=4.40%, ctx=246, majf=0, minf=1 00:15:10.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:15:10.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:10.371 issued rwts: total=2400,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:10.371 job1: (groupid=0, jobs=1): err= 0: pid=1494623: Tue Jul 16 00:16:28 2024 00:15:10.371 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:15:10.371 slat (nsec): min=1002, max=13499k, avg=114034.94, stdev=761012.81 00:15:10.371 clat (usec): min=5044, max=42054, avg=14545.13, stdev=6400.66 00:15:10.371 lat (usec): min=5049, max=42063, avg=14659.16, stdev=6454.92 00:15:10.371 clat percentiles (usec): 00:15:10.371 | 1.00th=[ 5997], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10290], 00:15:10.371 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11600], 60.00th=[13173], 00:15:10.371 | 70.00th=[15795], 80.00th=[19792], 90.00th=[23200], 95.00th=[26346], 00:15:10.371 | 99.00th=[41157], 99.50th=[41681], 
99.90th=[42206], 99.95th=[42206], 00:15:10.371 | 99.99th=[42206] 00:15:10.371 write: IOPS=4824, BW=18.8MiB/s (19.8MB/s)(18.9MiB/1002msec); 0 zone resets 00:15:10.371 slat (nsec): min=1661, max=9714.8k, avg=92785.34, stdev=551709.65 00:15:10.371 clat (usec): min=304, max=31041, avg=12315.38, stdev=5822.63 00:15:10.371 lat (usec): min=1168, max=38364, avg=12408.16, stdev=5851.70 00:15:10.371 clat percentiles (usec): 00:15:10.371 | 1.00th=[ 2900], 5.00th=[ 5735], 10.00th=[ 7242], 20.00th=[ 8029], 00:15:10.371 | 30.00th=[ 8979], 40.00th=[10028], 50.00th=[10552], 60.00th=[11600], 00:15:10.371 | 70.00th=[13566], 80.00th=[15533], 90.00th=[19792], 95.00th=[26870], 00:15:10.371 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31065], 99.95th=[31065], 00:15:10.371 | 99.99th=[31065] 00:15:10.371 bw ( KiB/s): min=16351, max=16351, per=25.07%, avg=16351.00, stdev= 0.00, samples=1 00:15:10.371 iops : min= 4087, max= 4087, avg=4087.00, stdev= 0.00, samples=1 00:15:10.371 lat (usec) : 500=0.01% 00:15:10.371 lat (msec) : 2=0.16%, 4=0.90%, 10=27.14%, 20=58.45%, 50=13.33% 00:15:10.371 cpu : usr=2.50%, sys=4.50%, ctx=457, majf=0, minf=1 00:15:10.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:10.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:10.371 issued rwts: total=4608,4834,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:10.371 job2: (groupid=0, jobs=1): err= 0: pid=1494625: Tue Jul 16 00:16:28 2024 00:15:10.371 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:15:10.371 slat (nsec): min=1053, max=11441k, avg=91095.83, stdev=572871.03 00:15:10.371 clat (usec): min=4414, max=53397, avg=12393.30, stdev=5578.95 00:15:10.371 lat (usec): min=4426, max=54584, avg=12484.40, stdev=5588.82 00:15:10.371 clat percentiles (usec): 00:15:10.371 | 1.00th=[ 6783], 5.00th=[ 7963], 10.00th=[ 8979], 20.00th=[ 9503], 00:15:10.371 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11076], 60.00th=[11731], 00:15:10.371 | 70.00th=[12649], 80.00th=[13829], 90.00th=[16057], 95.00th=[20579], 00:15:10.371 | 99.00th=[49021], 99.50th=[51643], 99.90th=[53216], 99.95th=[53216], 00:15:10.371 | 99.99th=[53216] 00:15:10.371 write: IOPS=5369, BW=21.0MiB/s (22.0MB/s)(21.1MiB/1004msec); 0 zone resets 00:15:10.371 slat (nsec): min=1804, max=41068k, avg=91751.08, stdev=716268.40 00:15:10.371 clat (usec): min=1664, max=53968, avg=11706.82, stdev=6708.68 00:15:10.371 lat (usec): min=1677, max=53971, avg=11798.58, stdev=6750.80 00:15:10.371 clat percentiles (usec): 00:15:10.371 | 1.00th=[ 3523], 5.00th=[ 6325], 10.00th=[ 7373], 20.00th=[ 8586], 00:15:10.371 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[10683], 60.00th=[11207], 00:15:10.371 | 70.00th=[11600], 80.00th=[12256], 90.00th=[14877], 95.00th=[24511], 00:15:10.371 | 99.00th=[48497], 99.50th=[50594], 99.90th=[53216], 99.95th=[53740], 00:15:10.371 | 99.99th=[53740] 00:15:10.371 bw ( KiB/s): min=20608, max=21461, per=32.25%, avg=21034.50, stdev=603.16, samples=2 00:15:10.371 iops : min= 5152, max= 5365, avg=5258.50, stdev=150.61, samples=2 00:15:10.371 lat (msec) : 2=0.10%, 4=0.68%, 10=32.60%, 20=60.82%, 50=5.19% 00:15:10.371 lat (msec) : 100=0.60% 00:15:10.371 cpu : usr=2.89%, sys=4.29%, ctx=683, majf=0, minf=1 00:15:10.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:10.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:15:10.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:10.372 issued rwts: total=5120,5391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.372 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:10.372 job3: (groupid=0, jobs=1): err= 0: pid=1494626: Tue Jul 16 00:16:28 2024 00:15:10.372 read: IOPS=3345, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1002msec) 00:15:10.372 slat (nsec): min=1131, max=13164k, avg=157137.30, stdev=869586.64 00:15:10.372 clat (usec): min=559, max=41140, avg=19228.68, stdev=6448.83 00:15:10.372 lat (usec): min=3832, max=41149, avg=19385.81, stdev=6503.46 00:15:10.372 clat percentiles (usec): 00:15:10.372 | 1.00th=[ 8029], 5.00th=[10552], 10.00th=[11994], 20.00th=[13698], 00:15:10.372 | 30.00th=[15401], 40.00th=[17171], 50.00th=[18744], 60.00th=[19530], 00:15:10.372 | 70.00th=[21627], 80.00th=[24511], 90.00th=[27657], 95.00th=[32113], 00:15:10.372 | 99.00th=[38536], 99.50th=[38536], 99.90th=[41157], 99.95th=[41157], 00:15:10.372 | 99.99th=[41157] 00:15:10.372 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:15:10.372 slat (nsec): min=1936, max=9900.9k, avg=125592.14, stdev=669616.66 00:15:10.372 clat (usec): min=7928, max=41061, avg=17323.84, stdev=5855.37 00:15:10.372 lat (usec): min=7940, max=41734, avg=17449.43, stdev=5897.79 00:15:10.372 clat percentiles (usec): 00:15:10.372 | 1.00th=[ 8848], 5.00th=[10552], 10.00th=[11731], 20.00th=[12649], 00:15:10.372 | 30.00th=[13566], 40.00th=[14222], 50.00th=[15664], 60.00th=[17433], 00:15:10.372 | 70.00th=[19006], 80.00th=[21365], 90.00th=[26084], 95.00th=[27395], 00:15:10.372 | 99.00th=[35390], 99.50th=[38536], 99.90th=[40109], 99.95th=[40109], 00:15:10.372 | 99.99th=[41157] 00:15:10.372 bw ( KiB/s): min=14435, max=14435, per=22.13%, avg=14435.00, stdev= 0.00, samples=1 00:15:10.372 iops : min= 3608, max= 3608, avg=3608.00, stdev= 0.00, samples=1 00:15:10.372 lat (usec) : 750=0.01% 00:15:10.372 lat (msec) : 4=0.26%, 10=3.26%, 20=64.50%, 50=31.96% 00:15:10.372 cpu : usr=3.00%, sys=4.70%, ctx=332, majf=0, minf=1 00:15:10.372 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:15:10.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:10.372 issued rwts: total=3352,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.372 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:10.372 00:15:10.372 Run status group 0 (all jobs): 00:15:10.372 READ: bw=60.2MiB/s (63.2MB/s), 9581KiB/s-19.9MiB/s (9811kB/s-20.9MB/s), io=60.5MiB (63.4MB), run=1002-1004msec 00:15:10.372 WRITE: bw=63.7MiB/s (66.8MB/s), 9.98MiB/s-21.0MiB/s (10.5MB/s-22.0MB/s), io=63.9MiB (67.0MB), run=1002-1004msec 00:15:10.372 00:15:10.372 Disk stats (read/write): 00:15:10.372 nvme0n1: ios=2098/2135, merge=0/0, ticks=14845/11494, in_queue=26339, util=86.97% 00:15:10.372 nvme0n2: ios=4006/4096, merge=0/0, ticks=26254/19598, in_queue=45852, util=86.46% 00:15:10.372 nvme0n3: ios=4174/4608, merge=0/0, ticks=27332/23549, in_queue=50881, util=96.56% 00:15:10.372 nvme0n4: ios=2687/3072, merge=0/0, ticks=22474/19010, in_queue=41484, util=96.85% 00:15:10.372 00:16:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:10.372 00:16:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1494855 00:15:10.372 00:16:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:10.372 00:16:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:10.372 [global] 00:15:10.372 thread=1 00:15:10.372 invalidate=1 00:15:10.372 rw=read 00:15:10.372 time_based=1 00:15:10.372 runtime=10 00:15:10.372 ioengine=libaio 00:15:10.372 direct=1 00:15:10.372 bs=4096 00:15:10.372 iodepth=1 00:15:10.372 norandommap=1 00:15:10.372 numjobs=1 00:15:10.372 00:15:10.372 [job0] 00:15:10.372 filename=/dev/nvme0n1 00:15:10.372 [job1] 00:15:10.372 filename=/dev/nvme0n2 00:15:10.372 [job2] 00:15:10.372 filename=/dev/nvme0n3 00:15:10.372 [job3] 00:15:10.372 filename=/dev/nvme0n4 00:15:10.372 Could not set queue depth (nvme0n1) 00:15:10.372 Could not set queue depth (nvme0n2) 00:15:10.372 Could not set queue depth (nvme0n3) 00:15:10.372 Could not set queue depth (nvme0n4) 00:15:10.630 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:10.630 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:10.630 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:10.630 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:10.630 fio-3.35 00:15:10.630 Starting 4 threads 00:15:13.165 00:16:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:13.425 00:16:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:13.425 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=274432, buflen=4096 00:15:13.425 fio: pid=1495011, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:13.684 00:16:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:13.684 00:16:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:13.684 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=1015808, buflen=4096 00:15:13.684 fio: pid=1495007, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:13.943 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=32436224, buflen=4096 00:15:13.943 fio: pid=1494996, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:13.943 00:16:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:13.943 00:16:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:14.202 00:16:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:14.202 00:16:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:14.202 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=335872, buflen=4096 00:15:14.202 fio: pid=1494999, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:14.202 00:15:14.203 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1494996: Tue Jul 16 00:16:32 2024 00:15:14.203 
read: IOPS=2543, BW=9.93MiB/s (10.4MB/s)(30.9MiB/3114msec) 00:15:14.203 slat (usec): min=7, max=8496, avg=10.34, stdev=133.81 00:15:14.203 clat (usec): min=212, max=40900, avg=377.09, stdev=460.32 00:15:14.203 lat (usec): min=220, max=40911, avg=387.43, stdev=480.13 00:15:14.203 clat percentiles (usec): 00:15:14.203 | 1.00th=[ 239], 5.00th=[ 269], 10.00th=[ 355], 20.00th=[ 363], 00:15:14.203 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 379], 00:15:14.203 | 70.00th=[ 383], 80.00th=[ 388], 90.00th=[ 396], 95.00th=[ 408], 00:15:14.203 | 99.00th=[ 486], 99.50th=[ 537], 99.90th=[ 693], 99.95th=[ 2245], 00:15:14.203 | 99.99th=[41157] 00:15:14.203 bw ( KiB/s): min= 9256, max=11424, per=100.00%, avg=10234.83, stdev=749.82, samples=6 00:15:14.203 iops : min= 2314, max= 2856, avg=2558.67, stdev=187.50, samples=6 00:15:14.203 lat (usec) : 250=2.90%, 500=96.19%, 750=0.83% 00:15:14.203 lat (msec) : 2=0.01%, 4=0.03%, 10=0.01%, 50=0.01% 00:15:14.203 cpu : usr=1.96%, sys=3.60%, ctx=7924, majf=0, minf=1 00:15:14.203 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:14.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.203 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.203 issued rwts: total=7920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.203 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:14.203 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1494999: Tue Jul 16 00:16:32 2024 00:15:14.203 read: IOPS=24, BW=98.4KiB/s (101kB/s)(328KiB/3334msec) 00:15:14.203 slat (usec): min=11, max=13727, avg=361.22, stdev=1897.20 00:15:14.203 clat (usec): min=361, max=42011, avg=40038.53, stdev=6290.21 00:15:14.203 lat (usec): min=386, max=54906, avg=40403.93, stdev=6634.24 00:15:14.203 clat percentiles (usec): 00:15:14.203 | 1.00th=[ 363], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:15:14.203 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:14.203 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:14.203 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:14.203 | 99.99th=[42206] 00:15:14.203 bw ( KiB/s): min= 96, max= 106, per=0.99%, avg=99.00, stdev= 4.69, samples=6 00:15:14.203 iops : min= 24, max= 26, avg=24.67, stdev= 1.03, samples=6 00:15:14.203 lat (usec) : 500=1.20%, 750=1.20% 00:15:14.203 lat (msec) : 50=96.39% 00:15:14.203 cpu : usr=0.12%, sys=0.00%, ctx=88, majf=0, minf=1 00:15:14.203 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:14.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.203 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.203 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.203 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:14.203 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1495007: Tue Jul 16 00:16:32 2024 00:15:14.203 read: IOPS=84, BW=338KiB/s (346kB/s)(992KiB/2934msec) 00:15:14.203 slat (usec): min=8, max=139, avg=13.97, stdev=10.68 00:15:14.203 clat (usec): min=303, max=42119, avg=11731.10, stdev=18280.01 00:15:14.203 lat (usec): min=312, max=42132, avg=11745.03, stdev=18285.81 00:15:14.203 clat percentiles (usec): 00:15:14.203 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 334], 00:15:14.203 | 30.00th=[ 351], 40.00th=[ 416], 
50.00th=[ 429], 60.00th=[ 445], 00:15:14.203 | 70.00th=[ 510], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:14.203 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:14.203 | 99.99th=[42206] 00:15:14.203 bw ( KiB/s): min= 96, max= 1496, per=3.81%, avg=380.80, stdev=623.43, samples=5 00:15:14.203 iops : min= 24, max= 374, avg=95.20, stdev=155.86, samples=5 00:15:14.203 lat (usec) : 500=67.47%, 750=3.61% 00:15:14.203 lat (msec) : 2=0.80%, 50=27.71% 00:15:14.203 cpu : usr=0.14%, sys=0.07%, ctx=251, majf=0, minf=1 00:15:14.203 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:14.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.203 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.203 issued rwts: total=249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.203 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:14.203 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1495011: Tue Jul 16 00:16:32 2024 00:15:14.203 read: IOPS=24, BW=98.0KiB/s (100kB/s)(268KiB/2735msec) 00:15:14.203 slat (nsec): min=10482, max=35386, avg=22050.07, stdev=2592.99 00:15:14.203 clat (usec): min=544, max=42028, avg=40490.50, stdev=4965.07 00:15:14.203 lat (usec): min=579, max=42049, avg=40512.54, stdev=4963.37 00:15:14.203 clat percentiles (usec): 00:15:14.203 | 1.00th=[ 545], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:15:14.203 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:14.203 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:15:14.203 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:14.203 | 99.99th=[42206] 00:15:14.203 bw ( KiB/s): min= 96, max= 104, per=0.97%, avg=97.60, stdev= 3.58, samples=5 00:15:14.203 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:15:14.203 lat (usec) : 750=1.47% 00:15:14.203 lat (msec) : 50=97.06% 00:15:14.203 cpu : usr=0.07%, sys=0.00%, ctx=68, majf=0, minf=2 00:15:14.203 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:14.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.203 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.203 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.203 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:14.203 00:15:14.203 Run status group 0 (all jobs): 00:15:14.203 READ: bw=9977KiB/s (10.2MB/s), 98.0KiB/s-9.93MiB/s (100kB/s-10.4MB/s), io=32.5MiB (34.1MB), run=2735-3334msec 00:15:14.203 00:15:14.203 Disk stats (read/write): 00:15:14.203 nvme0n1: ios=7955/0, merge=0/0, ticks=3883/0, in_queue=3883, util=98.92% 00:15:14.203 nvme0n2: ios=114/0, merge=0/0, ticks=4005/0, in_queue=4005, util=98.98% 00:15:14.203 nvme0n3: ios=289/0, merge=0/0, ticks=3679/0, in_queue=3679, util=99.22% 00:15:14.203 nvme0n4: ios=64/0, merge=0/0, ticks=2590/0, in_queue=2590, util=96.41% 00:15:14.203 00:16:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:14.203 00:16:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:14.462 00:16:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:14.462 00:16:33 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:14.720 00:16:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:14.720 00:16:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:14.720 00:16:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:14.720 00:16:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:14.979 00:16:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:14.979 00:16:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1494855 00:15:14.979 00:16:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:14.979 00:16:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:15.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.238 00:16:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:15.238 00:16:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1213 -- # local i=0 00:15:15.238 00:16:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:15:15.238 00:16:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.238 00:16:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:15:15.238 00:16:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.238 00:16:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1225 -- # return 0 00:15:15.238 00:16:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:15.238 00:16:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:15.238 nvmf hotplug test: fio failed as expected 00:15:15.238 00:16:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.238 00:16:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:15.238 00:16:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:15.498 rmmod nvme_tcp 00:15:15.498 rmmod nvme_fabrics 00:15:15.498 rmmod nvme_keyring 00:15:15.498 
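At this point the fio hotplug pass is complete and the script tears everything down: the initiator disconnects from cnode1, the subsystem and remaining malloc bdevs are deleted, the fio verify-state files are removed, the kernel NVMe/TCP modules are unloaded (the rmmod lines above), and the nvmf_tgt application is killed. A minimal shell sketch of that teardown, assuming the rpc.py path used throughout this log; NVMF_TGT_PID is an illustrative stand-in for the target application's PID (1492137 in this run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Detach the kernel initiator from the subsystem under test.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # Remove the subsystem on the target side.
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # Clean up the fio verify-state files left behind by the wrapper.
    rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state
    # Unload the kernel NVMe-oF initiator modules; in this run the first call
    # also pulled out the dependent modules (the rmmod nvme_tcp / nvme_fabrics /
    # nvme_keyring lines above).
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Stop the nvmf_tgt application; the killprocess helper below does this
    # with uname/process-name checks before sending the signal.
    kill "$NVMF_TGT_PID"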
00:16:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1492137 ']' 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1492137 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@942 -- # '[' -z 1492137 ']' 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # kill -0 1492137 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@947 -- # uname 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1492137 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:15:15.498 00:16:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1492137' 00:15:15.499 killing process with pid 1492137 00:15:15.499 00:16:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@961 -- # kill 1492137 00:15:15.499 00:16:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # wait 1492137 00:15:15.759 00:16:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:15.759 00:16:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:15.759 00:16:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:15.759 00:16:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.759 00:16:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:15.759 00:16:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.759 00:16:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.759 00:16:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.667 00:16:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:17.667 00:15:17.667 real 0m26.507s 00:15:17.667 user 1m47.382s 00:15:17.667 sys 0m7.752s 00:15:17.667 00:16:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1118 -- # xtrace_disable 00:15:17.667 00:16:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.667 ************************************ 00:15:17.667 END TEST nvmf_fio_target 00:15:17.667 ************************************ 00:15:17.667 00:16:36 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:15:17.667 00:16:36 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:17.667 00:16:36 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:15:17.667 00:16:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:15:17.667 00:16:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:17.927 ************************************ 00:15:17.927 START TEST nvmf_bdevio 00:15:17.927 ************************************ 00:15:17.927 00:16:36 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:17.927 * Looking for test storage... 00:15:17.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:15:17.927 00:16:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:23.264 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:23.264 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:23.264 Found net devices under 0000:86:00.0: cvl_0_0 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:23.264 
Found net devices under 0000:86:00.1: cvl_0_1 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:23.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:23.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:15:23.264 00:15:23.264 --- 10.0.0.2 ping statistics --- 00:15:23.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.264 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:23.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:23.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:15:23.264 00:15:23.264 --- 10.0.0.1 ping statistics --- 00:15:23.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.264 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:15:23.264 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1499228 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1499228 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@823 -- # '[' -z 1499228 ']' 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@828 -- # local max_retries=100 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # xtrace_disable 00:15:23.265 00:16:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:23.265 [2024-07-16 00:16:41.667562] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:15:23.265 [2024-07-16 00:16:41.667606] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.265 [2024-07-16 00:16:41.723185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:23.265 [2024-07-16 00:16:41.802782] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.265 [2024-07-16 00:16:41.802817] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
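Before the bdevio target comes up, nvmftestinit splits the two E810 ports across network namespaces: the target port cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed 10.0.0.2, while the initiator keeps cvl_0_1 at 10.0.0.1 in the root namespace, so host and target run separate TCP stacks on one machine, and a one-packet ping in each direction proves the path. nvmf_tgt is then launched inside that namespace with -m 0x78, a core mask with bits 3-6 set, matching the four reactors reported on cores 3, 4, 5 and 6 below. The plumbing, condensed from the nvmf_tcp_init trace above (commands as logged, only the comments added):

#!/usr/bin/env bash
# Namespace setup condensed from the nvmf_tcp_init trace above.
set -e
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow TCP/4420 on cvl_0_1
ping -c 1 10.0.0.2                              # root ns -> target ns sanity check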
00:15:23.265 [2024-07-16 00:16:41.802824] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.265 [2024-07-16 00:16:41.802830] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.265 [2024-07-16 00:16:41.802835] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:23.265 [2024-07-16 00:16:41.802949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:23.265 [2024-07-16 00:16:41.803056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:23.265 [2024-07-16 00:16:41.803162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:23.265 [2024-07-16 00:16:41.803163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # return 0 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:23.834 [2024-07-16 00:16:42.506219] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:23.834 Malloc0 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:15:23.834 [2024-07-16 00:16:42.557798] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:23.834 { 00:15:23.834 "params": { 00:15:23.834 "name": "Nvme$subsystem", 00:15:23.834 "trtype": "$TEST_TRANSPORT", 00:15:23.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:23.834 "adrfam": "ipv4", 00:15:23.834 "trsvcid": "$NVMF_PORT", 00:15:23.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:23.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:23.834 "hdgst": ${hdgst:-false}, 00:15:23.834 "ddgst": ${ddgst:-false} 00:15:23.834 }, 00:15:23.834 "method": "bdev_nvme_attach_controller" 00:15:23.834 } 00:15:23.834 EOF 00:15:23.834 )") 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:23.834 00:16:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:23.834 "params": { 00:15:23.834 "name": "Nvme1", 00:15:23.834 "trtype": "tcp", 00:15:23.834 "traddr": "10.0.0.2", 00:15:23.834 "adrfam": "ipv4", 00:15:23.834 "trsvcid": "4420", 00:15:23.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:23.834 "hdgst": false, 00:15:23.834 "ddgst": false 00:15:23.834 }, 00:15:23.834 "method": "bdev_nvme_attach_controller" 00:15:23.834 }' 00:15:23.834 [2024-07-16 00:16:42.607059] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
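With connectivity verified, bdevio.sh builds the whole target over JSON-RPC: one TCP transport, a 64 MiB Malloc bdev, a subsystem with allow-any-host set, a namespace, and a listener on 10.0.0.2:4420. gen_nvmf_target_json then renders the bdev_nvme_attach_controller configuration printed above and feeds it to the bdevio binary via /dev/fd/62. The same sequence, with the workspace-long rpc.py path shortened (that shorthand is the only assumption here):

#!/usr/bin/env bash
# Target setup as issued by the rpc_cmd calls in the trace above.
set -e
nqn=nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_create_transport -t tcp -o -u 8192   # transport options exactly as logged
rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512-byte blocks
rpc.py nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns "$nqn" Malloc0
rpc.py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420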
00:15:23.834 [2024-07-16 00:16:42.607104] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499475 ] 00:15:23.834 [2024-07-16 00:16:42.661128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:24.093 [2024-07-16 00:16:42.738024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.093 [2024-07-16 00:16:42.738121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:24.093 [2024-07-16 00:16:42.738123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.353 I/O targets: 00:15:24.353 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:24.353 00:15:24.353 00:15:24.353 CUnit - A unit testing framework for C - Version 2.1-3 00:15:24.353 http://cunit.sourceforge.net/ 00:15:24.353 00:15:24.353 00:15:24.353 Suite: bdevio tests on: Nvme1n1 00:15:24.353 Test: blockdev write read block ...passed 00:15:24.353 Test: blockdev write zeroes read block ...passed 00:15:24.353 Test: blockdev write zeroes read no split ...passed 00:15:24.353 Test: blockdev write zeroes read split ...passed 00:15:24.614 Test: blockdev write zeroes read split partial ...passed 00:15:24.614 Test: blockdev reset ...[2024-07-16 00:16:43.223015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:24.614 [2024-07-16 00:16:43.223076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x253f6d0 (9): Bad file descriptor 00:15:24.614 [2024-07-16 00:16:43.236091] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:24.614 passed 00:15:24.614 Test: blockdev write read 8 blocks ...passed 00:15:24.614 Test: blockdev write read size > 128k ...passed 00:15:24.614 Test: blockdev write read invalid size ...passed 00:15:24.614 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:24.614 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:24.614 Test: blockdev write read max offset ...passed 00:15:24.614 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:24.614 Test: blockdev writev readv 8 blocks ...passed 00:15:24.614 Test: blockdev writev readv 30 x 1block ...passed 00:15:24.614 Test: blockdev writev readv block ...passed 00:15:24.614 Test: blockdev writev readv size > 128k ...passed 00:15:24.614 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:24.614 Test: blockdev comparev and writev ...[2024-07-16 00:16:43.409099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.614 [2024-07-16 00:16:43.409128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:24.614 [2024-07-16 00:16:43.409141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.614 [2024-07-16 00:16:43.409149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:24.614 [2024-07-16 00:16:43.409439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.614 [2024-07-16 00:16:43.409450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:24.614 [2024-07-16 00:16:43.409462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.614 [2024-07-16 00:16:43.409469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:24.614 [2024-07-16 00:16:43.409759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.614 [2024-07-16 00:16:43.409771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:24.614 [2024-07-16 00:16:43.409782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.614 [2024-07-16 00:16:43.409789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:24.614 [2024-07-16 00:16:43.410084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.614 [2024-07-16 00:16:43.410096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:24.614 [2024-07-16 00:16:43.410107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.614 [2024-07-16 00:16:43.410115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:24.614 passed 00:15:24.873 Test: blockdev nvme passthru rw ...passed 00:15:24.874 Test: blockdev nvme passthru vendor specific ...[2024-07-16 00:16:43.492666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:24.874 [2024-07-16 00:16:43.492685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:24.874 [2024-07-16 00:16:43.492848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:24.874 [2024-07-16 00:16:43.492859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:24.874 [2024-07-16 00:16:43.493018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:24.874 [2024-07-16 00:16:43.493028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:24.874 [2024-07-16 00:16:43.493186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:24.874 [2024-07-16 00:16:43.493197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:24.874 passed 00:15:24.874 Test: blockdev nvme admin passthru ...passed 00:15:24.874 Test: blockdev copy ...passed 00:15:24.874 00:15:24.874 Run Summary: Type Total Ran Passed Failed Inactive 00:15:24.874 suites 1 1 n/a 0 0 00:15:24.874 tests 23 23 23 0 0 00:15:24.874 asserts 152 152 152 0 n/a 00:15:24.874 00:15:24.874 Elapsed time = 1.056 seconds 00:15:24.874 00:16:43 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.874 00:16:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:24.874 00:16:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:24.874 00:16:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:24.874 00:16:43 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:24.874 00:16:43 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:24.874 00:16:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:24.874 00:16:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:24.874 00:16:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:24.874 00:16:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:24.874 00:16:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:24.874 00:16:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:25.133 rmmod nvme_tcp 00:15:25.133 rmmod nvme_fabrics 00:15:25.133 rmmod nvme_keyring 00:15:25.133 00:16:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:25.133 00:16:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:25.133 00:16:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:15:25.133 00:16:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1499228 ']' 00:15:25.133 00:16:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1499228 00:15:25.133 00:16:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@942 -- # '[' -z 
1499228 ']' 00:15:25.133 00:16:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # kill -0 1499228 00:15:25.133 00:16:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@947 -- # uname 00:15:25.133 00:16:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:15:25.133 00:16:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1499228 00:15:25.133 00:16:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # process_name=reactor_3 00:15:25.133 00:16:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' reactor_3 = sudo ']' 00:15:25.133 00:16:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1499228' 00:15:25.133 killing process with pid 1499228 00:15:25.133 00:16:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@961 -- # kill 1499228 00:15:25.133 00:16:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # wait 1499228 00:15:25.392 00:16:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:25.392 00:16:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:25.392 00:16:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:25.392 00:16:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.392 00:16:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:25.392 00:16:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.392 00:16:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.392 00:16:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.295 00:16:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:27.295 00:15:27.295 real 0m9.572s 00:15:27.295 user 0m12.471s 00:15:27.295 sys 0m4.245s 00:15:27.295 00:16:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1118 -- # xtrace_disable 00:15:27.295 00:16:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:27.295 ************************************ 00:15:27.295 END TEST nvmf_bdevio 00:15:27.295 ************************************ 00:15:27.295 00:16:46 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:15:27.295 00:16:46 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:27.295 00:16:46 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:15:27.295 00:16:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:15:27.295 00:16:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:27.553 ************************************ 00:15:27.553 START TEST nvmf_auth_target 00:15:27.553 ************************************ 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:27.553 * Looking for test storage... 
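The shutdown path repeats for every test: killprocess first checks that the recorded PID (1499228 here) still resolves to an SPDK reactor rather than the sudo wrapper before killing it, then wait reaps the child so run_test can report the timing block (real 0m9.572s above). A minimal sketch of that guard, condensed from the trace and assuming the target was started by the same shell so wait can reap it:

# killprocess guard, condensed from the trace above.
killprocess() {
  local pid=$1 name
  kill -0 "$pid" || return 1                   # is the process still alive?
  name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_3 in the log
  [ "$name" = sudo ] && return 1               # never kill the sudo wrapper itself
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                                  # reap; propagates the exit status
}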
00:15:27.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.553 00:16:46 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:27.554 00:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:32.828 00:16:51 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:32.828 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:32.828 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:32.829 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:15:32.829 Found net devices under 0000:86:00.0: cvl_0_0 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:32.829 Found net devices under 0000:86:00.1: cvl_0_1 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:32.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:32.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:15:32.829 00:15:32.829 --- 10.0.0.2 ping statistics --- 00:15:32.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.829 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:32.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:32.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:15:32.829 00:15:32.829 --- 10.0.0.1 ping statistics --- 00:15:32.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.829 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1503004 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1503004 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 1503004 ']' 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
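
Collected from the xtrace above for readability: the namespace plumbing that nvmf_tcp_init performs so the initiator (default namespace) and the target (namespace cvl_0_0_ns_spdk) exchange real TCP traffic, followed by launching nvmf_tgt inside that namespace. This is a condensed, runnable recap using the device names, addresses, and paths from this log, not a new procedure:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # NVMe/TCP listener port
ping -c 1 10.0.0.2                                   # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator reachability
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
# the test then polls until the app listens on /var/tmp/spdk.sock (waitforlisten)
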
00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:15:32.829 00:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1503246 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7b2bd3bfb057b5fb32492106e0803c2cca9f1d345be5971b 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.2WA 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7b2bd3bfb057b5fb32492106e0803c2cca9f1d345be5971b 0 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7b2bd3bfb057b5fb32492106e0803c2cca9f1d345be5971b 0 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7b2bd3bfb057b5fb32492106e0803c2cca9f1d345be5971b 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.2WA 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.2WA 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.2WA 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8acfb6dfca898f2e213c2d8528e5be805da022a380a0a8ac0711b5e26c40b9da 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.uMA 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8acfb6dfca898f2e213c2d8528e5be805da022a380a0a8ac0711b5e26c40b9da 3 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8acfb6dfca898f2e213c2d8528e5be805da022a380a0a8ac0711b5e26c40b9da 3 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8acfb6dfca898f2e213c2d8528e5be805da022a380a0a8ac0711b5e26c40b9da 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.uMA 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.uMA 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.uMA 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ee7f90a260f5dc0af911bd03f90e5ad1 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.bH9 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ee7f90a260f5dc0af911bd03f90e5ad1 1 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ee7f90a260f5dc0af911bd03f90e5ad1 1 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=ee7f90a260f5dc0af911bd03f90e5ad1 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.bH9 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.bH9 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.bH9 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bbf78fdc92d4b097dfece495918f13f6ce233f12a038bd13 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.X6F 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bbf78fdc92d4b097dfece495918f13f6ce233f12a038bd13 2 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bbf78fdc92d4b097dfece495918f13f6ce233f12a038bd13 2 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bbf78fdc92d4b097dfece495918f13f6ce233f12a038bd13 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:33.769 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.X6F 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.X6F 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.X6F 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d972f80ca14373a9bfd4f0e6cd1266f13971a38d838f9747 00:15:34.029 
00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ZdC 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d972f80ca14373a9bfd4f0e6cd1266f13971a38d838f9747 2 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d972f80ca14373a9bfd4f0e6cd1266f13971a38d838f9747 2 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d972f80ca14373a9bfd4f0e6cd1266f13971a38d838f9747 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ZdC 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ZdC 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.ZdC 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d4b7b991e94c49f2c6c107701542178d 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.kfl 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d4b7b991e94c49f2c6c107701542178d 1 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d4b7b991e94c49f2c6c107701542178d 1 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d4b7b991e94c49f2c6c107701542178d 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.kfl 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.kfl 00:15:34.029 00:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.kfl 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=54ec470cf83f0984983832fd9b4618ea5b4b613129607bea98606c4ffe9ab4c1 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.YWs 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 54ec470cf83f0984983832fd9b4618ea5b4b613129607bea98606c4ffe9ab4c1 3 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 54ec470cf83f0984983832fd9b4618ea5b4b613129607bea98606c4ffe9ab4c1 3 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=54ec470cf83f0984983832fd9b4618ea5b4b613129607bea98606c4ffe9ab4c1 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.YWs 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.YWs 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.YWs 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1503004 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 1503004 ']' 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
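
What gen_dhchap_key is doing in each of the repeated blocks above, as one self-contained sketch: a random hex string of the requested length is read with xxd from /dev/urandom, then wrapped into a DHHC-1 secret. The function name gen_dhchap_key_sketch and the python3 body are illustrative stand-ins for the script's inline `python -` step; the CRC-32 trailer is an assumption about that step, though it is consistent with the secrets visible later in this log (the DHHC-1:00: secret used at connect time decodes to the 48-char hex key above plus four trailer bytes):

gen_dhchap_key_sketch() {   # args: digest id (0=null 1=sha256 2=sha384 3=sha512), key length in hex chars
	local digest=$1 len=$2 hexkey
	hexkey=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
	python3 - "$hexkey" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                   # the ASCII hex string itself is the secret material
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed 4-byte CRC-32 trailer (TP 8006-style secret)
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
}

file=$(mktemp -t spdk.key-null.XXX)
gen_dhchap_key_sketch 0 48 > "$file" && chmod 0600 "$file"   # mirrors keys[0] above
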
00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:15:34.030 00:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.289 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:15:34.289 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:15:34.289 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1503246 /var/tmp/host.sock 00:15:34.289 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 1503246 ']' 00:15:34.289 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/host.sock 00:15:34.289 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:15:34.289 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:34.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:34.289 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:15:34.289 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.546 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:15:34.546 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:15:34.546 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:15:34.546 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:34.546 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.546 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:34.546 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:34.546 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.2WA 00:15:34.546 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:34.546 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.546 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:34.546 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.2WA 00:15:34.546 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.2WA 00:15:34.804 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.uMA ]] 00:15:34.804 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uMA 00:15:34.804 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:34.804 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.804 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:34.804 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uMA 00:15:34.804 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uMA 00:15:34.804 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:34.804 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.bH9 00:15:34.804 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:34.804 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.804 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:34.804 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.bH9 00:15:34.804 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.bH9 00:15:35.062 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.X6F ]] 00:15:35.062 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.X6F 00:15:35.062 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:35.062 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.062 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:35.062 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.X6F 00:15:35.062 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.X6F 00:15:35.320 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:35.320 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ZdC 00:15:35.320 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:35.320 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.320 00:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:35.320 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ZdC 00:15:35.320 00:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ZdC 00:15:35.320 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.kfl ]] 00:15:35.320 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kfl 00:15:35.320 00:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:35.320 00:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.320 00:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:35.320 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kfl 00:15:35.320 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.kfl 00:15:35.579 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:35.579 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.YWs 00:15:35.579 00:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:35.579 00:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.579 00:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:35.579 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.YWs 00:15:35.579 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.YWs 00:15:35.839 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:15:35.839 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:35.839 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:35.839 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:35.839 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:35.839 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:35.839 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:15:35.839 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:35.839 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:35.839 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:35.839 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:35.839 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.839 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.839 00:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:35.839 00:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.098 00:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:36.098 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.098 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.098 00:15:36.098 00:16:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.098 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.098 00:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.357 00:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.357 00:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.357 00:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:36.357 00:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.357 00:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:36.357 00:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:36.357 { 00:15:36.357 "cntlid": 1, 00:15:36.357 "qid": 0, 00:15:36.357 "state": "enabled", 00:15:36.357 "thread": "nvmf_tgt_poll_group_000", 00:15:36.357 "listen_address": { 00:15:36.357 "trtype": "TCP", 00:15:36.357 "adrfam": "IPv4", 00:15:36.357 "traddr": "10.0.0.2", 00:15:36.357 "trsvcid": "4420" 00:15:36.357 }, 00:15:36.357 "peer_address": { 00:15:36.357 "trtype": "TCP", 00:15:36.357 "adrfam": "IPv4", 00:15:36.357 "traddr": "10.0.0.1", 00:15:36.357 "trsvcid": "42248" 00:15:36.357 }, 00:15:36.357 "auth": { 00:15:36.357 "state": "completed", 00:15:36.357 "digest": "sha256", 00:15:36.357 "dhgroup": "null" 00:15:36.357 } 00:15:36.357 } 00:15:36.357 ]' 00:15:36.357 00:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:36.357 00:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:36.357 00:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:36.357 00:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:36.357 00:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:36.615 00:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.615 00:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.615 00:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.615 00:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2IyYmQzYmZiMDU3YjVmYjMyNDkyMTA2ZTA4MDNjMmNjYTlmMWQzNDViZTU5NzFiAN8c7w==: --dhchap-ctrl-secret DHHC-1:03:OGFjZmI2ZGZjYTg5OGYyZTIxM2MyZDg1MjhlNWJlODA1ZGEwMjJhMzgwYTBhOGFjMDcxMWI1ZTI2YzQwYjlkYf2tkOw=: 00:15:37.182 00:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.182 00:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:37.182 00:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:37.182 00:16:55 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.182 00:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:37.182 00:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:37.182 00:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:37.182 00:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:37.441 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:15:37.441 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.441 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:37.441 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:37.441 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:37.441 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.441 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.441 00:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:37.441 00:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.441 00:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:37.441 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.441 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.700 00:15:37.700 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.700 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.700 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.958 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.958 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.958 00:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:37.958 00:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.958 00:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:37.958 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.958 { 00:15:37.958 "cntlid": 3, 00:15:37.958 "qid": 0, 00:15:37.958 
"state": "enabled", 00:15:37.958 "thread": "nvmf_tgt_poll_group_000", 00:15:37.958 "listen_address": { 00:15:37.958 "trtype": "TCP", 00:15:37.959 "adrfam": "IPv4", 00:15:37.959 "traddr": "10.0.0.2", 00:15:37.959 "trsvcid": "4420" 00:15:37.959 }, 00:15:37.959 "peer_address": { 00:15:37.959 "trtype": "TCP", 00:15:37.959 "adrfam": "IPv4", 00:15:37.959 "traddr": "10.0.0.1", 00:15:37.959 "trsvcid": "42284" 00:15:37.959 }, 00:15:37.959 "auth": { 00:15:37.959 "state": "completed", 00:15:37.959 "digest": "sha256", 00:15:37.959 "dhgroup": "null" 00:15:37.959 } 00:15:37.959 } 00:15:37.959 ]' 00:15:37.959 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.959 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.959 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.959 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:37.959 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.959 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.959 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.959 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.253 00:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWU3ZjkwYTI2MGY1ZGMwYWY5MTFiZDAzZjkwZTVhZDGjnqBT: --dhchap-ctrl-secret DHHC-1:02:YmJmNzhmZGM5MmQ0YjA5N2RmZWNlNDk1OTE4ZjEzZjZjZTIzM2YxMmEwMzhiZDEzb47+Ng==: 00:15:38.819 00:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.819 00:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.819 00:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:38.819 00:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.819 00:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:38.819 00:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.819 00:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:38.819 00:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:39.077 00:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:15:39.077 00:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:39.077 00:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:39.077 00:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:39.077 00:16:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:39.077 00:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.077 00:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.077 00:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:39.077 00:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.077 00:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:39.078 00:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.078 00:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.078 00:15:39.078 00:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.078 00:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.078 00:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.337 00:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.337 00:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.337 00:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:39.337 00:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.337 00:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:39.337 00:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.337 { 00:15:39.337 "cntlid": 5, 00:15:39.337 "qid": 0, 00:15:39.337 "state": "enabled", 00:15:39.337 "thread": "nvmf_tgt_poll_group_000", 00:15:39.337 "listen_address": { 00:15:39.337 "trtype": "TCP", 00:15:39.337 "adrfam": "IPv4", 00:15:39.337 "traddr": "10.0.0.2", 00:15:39.337 "trsvcid": "4420" 00:15:39.337 }, 00:15:39.337 "peer_address": { 00:15:39.337 "trtype": "TCP", 00:15:39.337 "adrfam": "IPv4", 00:15:39.337 "traddr": "10.0.0.1", 00:15:39.337 "trsvcid": "41974" 00:15:39.337 }, 00:15:39.337 "auth": { 00:15:39.337 "state": "completed", 00:15:39.337 "digest": "sha256", 00:15:39.337 "dhgroup": "null" 00:15:39.337 } 00:15:39.337 } 00:15:39.337 ]' 00:15:39.337 00:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.337 00:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.337 00:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.337 00:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:39.337 00:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:15:39.596 00:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.596 00:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.596 00:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.596 00:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDk3MmY4MGNhMTQzNzNhOWJmZDRmMGU2Y2QxMjY2ZjEzOTcxYTM4ZDgzOGY5NzQ3C29SFA==: --dhchap-ctrl-secret DHHC-1:01:ZDRiN2I5OTFlOTRjNDlmMmM2YzEwNzcwMTU0MjE3OGQEbxAq: 00:15:40.163 00:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.163 00:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.163 00:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:40.163 00:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.163 00:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:40.163 00:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.163 00:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:40.163 00:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:40.422 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:15:40.422 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.422 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:40.422 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:40.422 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:40.422 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.422 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:40.422 00:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:40.422 00:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.422 00:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:40.422 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.422 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.681 00:15:40.681 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:40.681 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:40.681 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.940 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.940 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.940 00:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:40.940 00:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.940 00:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:40.940 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:40.940 { 00:15:40.940 "cntlid": 7, 00:15:40.940 "qid": 0, 00:15:40.940 "state": "enabled", 00:15:40.940 "thread": "nvmf_tgt_poll_group_000", 00:15:40.940 "listen_address": { 00:15:40.940 "trtype": "TCP", 00:15:40.940 "adrfam": "IPv4", 00:15:40.940 "traddr": "10.0.0.2", 00:15:40.940 "trsvcid": "4420" 00:15:40.940 }, 00:15:40.940 "peer_address": { 00:15:40.940 "trtype": "TCP", 00:15:40.940 "adrfam": "IPv4", 00:15:40.940 "traddr": "10.0.0.1", 00:15:40.940 "trsvcid": "42012" 00:15:40.940 }, 00:15:40.940 "auth": { 00:15:40.940 "state": "completed", 00:15:40.940 "digest": "sha256", 00:15:40.940 "dhgroup": "null" 00:15:40.940 } 00:15:40.940 } 00:15:40.940 ]' 00:15:40.940 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:40.940 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.940 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:40.940 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:40.940 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:40.940 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.940 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.940 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.198 00:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTRlYzQ3MGNmODNmMDk4NDk4MzgzMmZkOWI0NjE4ZWE1YjRiNjEzMTI5NjA3YmVhOTg2MDZjNGZmZTlhYjRjMcBoLk4=: 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.765 00:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.023 00:15:42.023 00:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:42.023 00:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.023 00:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.281 00:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.281 00:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.281 00:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 
-- # xtrace_disable 00:15:42.281 00:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.281 00:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:42.281 00:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.281 { 00:15:42.281 "cntlid": 9, 00:15:42.281 "qid": 0, 00:15:42.281 "state": "enabled", 00:15:42.281 "thread": "nvmf_tgt_poll_group_000", 00:15:42.281 "listen_address": { 00:15:42.281 "trtype": "TCP", 00:15:42.281 "adrfam": "IPv4", 00:15:42.281 "traddr": "10.0.0.2", 00:15:42.281 "trsvcid": "4420" 00:15:42.281 }, 00:15:42.281 "peer_address": { 00:15:42.281 "trtype": "TCP", 00:15:42.281 "adrfam": "IPv4", 00:15:42.281 "traddr": "10.0.0.1", 00:15:42.281 "trsvcid": "42030" 00:15:42.281 }, 00:15:42.281 "auth": { 00:15:42.281 "state": "completed", 00:15:42.281 "digest": "sha256", 00:15:42.281 "dhgroup": "ffdhe2048" 00:15:42.281 } 00:15:42.281 } 00:15:42.281 ]' 00:15:42.281 00:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.281 00:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.281 00:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.281 00:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:42.281 00:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.540 00:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.540 00:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.540 00:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.540 00:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2IyYmQzYmZiMDU3YjVmYjMyNDkyMTA2ZTA4MDNjMmNjYTlmMWQzNDViZTU5NzFiAN8c7w==: --dhchap-ctrl-secret DHHC-1:03:OGFjZmI2ZGZjYTg5OGYyZTIxM2MyZDg1MjhlNWJlODA1ZGEwMjJhMzgwYTBhOGFjMDcxMWI1ZTI2YzQwYjlkYf2tkOw=: 00:15:43.143 00:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.143 00:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:43.143 00:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:43.143 00:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.144 00:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:43.144 00:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.144 00:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:43.144 00:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:43.403 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:15:43.403 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.403 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:43.403 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:43.403 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:43.403 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.403 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.403 00:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:43.403 00:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.403 00:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:43.403 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.403 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.662 00:15:43.662 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.662 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.662 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.662 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.662 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.662 00:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:43.662 00:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.920 00:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:43.920 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.920 { 00:15:43.920 "cntlid": 11, 00:15:43.920 "qid": 0, 00:15:43.920 "state": "enabled", 00:15:43.920 "thread": "nvmf_tgt_poll_group_000", 00:15:43.920 "listen_address": { 00:15:43.920 "trtype": "TCP", 00:15:43.920 "adrfam": "IPv4", 00:15:43.920 "traddr": "10.0.0.2", 00:15:43.920 "trsvcid": "4420" 00:15:43.920 }, 00:15:43.920 "peer_address": { 00:15:43.920 "trtype": "TCP", 00:15:43.920 "adrfam": "IPv4", 00:15:43.920 "traddr": "10.0.0.1", 00:15:43.920 "trsvcid": "42058" 00:15:43.920 }, 00:15:43.920 "auth": { 00:15:43.920 "state": "completed", 00:15:43.920 "digest": "sha256", 00:15:43.920 "dhgroup": "ffdhe2048" 00:15:43.920 } 00:15:43.920 } 00:15:43.920 ]' 00:15:43.920 
00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.920 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:43.920 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.920 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:43.920 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:43.920 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.920 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.920 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.179 00:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWU3ZjkwYTI2MGY1ZGMwYWY5MTFiZDAzZjkwZTVhZDGjnqBT: --dhchap-ctrl-secret DHHC-1:02:YmJmNzhmZGM5MmQ0YjA5N2RmZWNlNDk1OTE4ZjEzZjZjZTIzM2YxMmEwMzhiZDEzb47+Ng==: 00:15:44.748 00:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.748 00:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:44.748 00:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:44.748 00:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.748 00:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:44.748 00:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:44.748 00:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:44.748 00:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:44.748 00:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:15:44.748 00:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:44.748 00:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:44.748 00:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:44.748 00:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:44.748 00:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.748 00:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.748 00:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:44.748 00:17:03 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:44.748 00:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:44.748 00:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.748 00:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.006 00:15:45.006 00:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.006 00:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.006 00:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.264 00:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.264 00:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.264 00:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:45.264 00:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.264 00:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:45.264 00:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.264 { 00:15:45.264 "cntlid": 13, 00:15:45.264 "qid": 0, 00:15:45.264 "state": "enabled", 00:15:45.264 "thread": "nvmf_tgt_poll_group_000", 00:15:45.264 "listen_address": { 00:15:45.264 "trtype": "TCP", 00:15:45.264 "adrfam": "IPv4", 00:15:45.264 "traddr": "10.0.0.2", 00:15:45.264 "trsvcid": "4420" 00:15:45.264 }, 00:15:45.264 "peer_address": { 00:15:45.264 "trtype": "TCP", 00:15:45.264 "adrfam": "IPv4", 00:15:45.264 "traddr": "10.0.0.1", 00:15:45.264 "trsvcid": "42082" 00:15:45.264 }, 00:15:45.264 "auth": { 00:15:45.264 "state": "completed", 00:15:45.264 "digest": "sha256", 00:15:45.264 "dhgroup": "ffdhe2048" 00:15:45.264 } 00:15:45.264 } 00:15:45.264 ]' 00:15:45.264 00:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.264 00:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:45.264 00:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.264 00:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:45.264 00:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.523 00:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.523 00:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.523 00:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.523 00:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDk3MmY4MGNhMTQzNzNhOWJmZDRmMGU2Y2QxMjY2ZjEzOTcxYTM4ZDgzOGY5NzQ3C29SFA==: --dhchap-ctrl-secret DHHC-1:01:ZDRiN2I5OTFlOTRjNDlmMmM2YzEwNzcwMTU0MjE3OGQEbxAq: 00:15:46.089 00:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.089 00:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.089 00:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:46.089 00:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.090 00:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:46.090 00:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.090 00:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:46.090 00:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:46.348 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:15:46.348 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.348 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:46.348 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:46.348 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:46.348 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.348 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:46.348 00:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:46.348 00:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.348 00:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:46.348 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:46.348 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:46.607 00:15:46.607 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:46.607 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:46.607 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.607 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.607 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.607 00:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:46.607 00:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.866 00:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:46.866 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.866 { 00:15:46.866 "cntlid": 15, 00:15:46.866 "qid": 0, 00:15:46.866 "state": "enabled", 00:15:46.866 "thread": "nvmf_tgt_poll_group_000", 00:15:46.866 "listen_address": { 00:15:46.866 "trtype": "TCP", 00:15:46.866 "adrfam": "IPv4", 00:15:46.866 "traddr": "10.0.0.2", 00:15:46.866 "trsvcid": "4420" 00:15:46.866 }, 00:15:46.866 "peer_address": { 00:15:46.866 "trtype": "TCP", 00:15:46.866 "adrfam": "IPv4", 00:15:46.866 "traddr": "10.0.0.1", 00:15:46.866 "trsvcid": "42114" 00:15:46.866 }, 00:15:46.866 "auth": { 00:15:46.866 "state": "completed", 00:15:46.866 "digest": "sha256", 00:15:46.866 "dhgroup": "ffdhe2048" 00:15:46.866 } 00:15:46.866 } 00:15:46.866 ]' 00:15:46.866 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.866 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.866 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.866 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:46.866 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.866 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.867 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.867 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.126 00:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTRlYzQ3MGNmODNmMDk4NDk4MzgzMmZkOWI0NjE4ZWE1YjRiNjEzMTI5NjA3YmVhOTg2MDZjNGZmZTlhYjRjMcBoLk4=: 00:15:47.694 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.694 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:47.694 00:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:47.694 00:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.694 00:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:47.694 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:47.694 00:17:06 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.694 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:47.694 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:47.694 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:15:47.694 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.694 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:47.694 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:47.694 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:47.694 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.694 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.694 00:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:47.694 00:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.953 00:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:47.953 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.953 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.953 00:15:47.953 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.953 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.953 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.213 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.213 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.213 00:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:48.213 00:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.213 00:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:48.213 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:48.213 { 00:15:48.213 "cntlid": 17, 00:15:48.213 "qid": 0, 00:15:48.213 "state": "enabled", 00:15:48.213 "thread": "nvmf_tgt_poll_group_000", 00:15:48.213 "listen_address": { 00:15:48.213 "trtype": "TCP", 00:15:48.213 "adrfam": "IPv4", 00:15:48.213 "traddr": 
"10.0.0.2", 00:15:48.213 "trsvcid": "4420" 00:15:48.213 }, 00:15:48.213 "peer_address": { 00:15:48.213 "trtype": "TCP", 00:15:48.213 "adrfam": "IPv4", 00:15:48.213 "traddr": "10.0.0.1", 00:15:48.213 "trsvcid": "42152" 00:15:48.213 }, 00:15:48.213 "auth": { 00:15:48.213 "state": "completed", 00:15:48.213 "digest": "sha256", 00:15:48.213 "dhgroup": "ffdhe3072" 00:15:48.213 } 00:15:48.213 } 00:15:48.213 ]' 00:15:48.213 00:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.213 00:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:48.213 00:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.473 00:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:48.473 00:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.473 00:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.473 00:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.473 00:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.473 00:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2IyYmQzYmZiMDU3YjVmYjMyNDkyMTA2ZTA4MDNjMmNjYTlmMWQzNDViZTU5NzFiAN8c7w==: --dhchap-ctrl-secret DHHC-1:03:OGFjZmI2ZGZjYTg5OGYyZTIxM2MyZDg1MjhlNWJlODA1ZGEwMjJhMzgwYTBhOGFjMDcxMWI1ZTI2YzQwYjlkYf2tkOw=: 00:15:49.041 00:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.041 00:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.041 00:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:49.041 00:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.041 00:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:49.041 00:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.041 00:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:49.041 00:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:49.300 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:15:49.300 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.300 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:49.300 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:49.300 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:49.300 00:17:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.300 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.300 00:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:49.300 00:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.300 00:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:49.300 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.300 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.560 00:15:49.560 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:49.560 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.560 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.819 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.819 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.819 00:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:49.819 00:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.819 00:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:49.819 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.819 { 00:15:49.819 "cntlid": 19, 00:15:49.819 "qid": 0, 00:15:49.819 "state": "enabled", 00:15:49.819 "thread": "nvmf_tgt_poll_group_000", 00:15:49.819 "listen_address": { 00:15:49.819 "trtype": "TCP", 00:15:49.819 "adrfam": "IPv4", 00:15:49.819 "traddr": "10.0.0.2", 00:15:49.819 "trsvcid": "4420" 00:15:49.819 }, 00:15:49.819 "peer_address": { 00:15:49.819 "trtype": "TCP", 00:15:49.819 "adrfam": "IPv4", 00:15:49.819 "traddr": "10.0.0.1", 00:15:49.819 "trsvcid": "52884" 00:15:49.819 }, 00:15:49.819 "auth": { 00:15:49.819 "state": "completed", 00:15:49.819 "digest": "sha256", 00:15:49.819 "dhgroup": "ffdhe3072" 00:15:49.819 } 00:15:49.819 } 00:15:49.819 ]' 00:15:49.819 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.819 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.819 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.819 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:49.819 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.819 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.819 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.819 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.078 00:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWU3ZjkwYTI2MGY1ZGMwYWY5MTFiZDAzZjkwZTVhZDGjnqBT: --dhchap-ctrl-secret DHHC-1:02:YmJmNzhmZGM5MmQ0YjA5N2RmZWNlNDk1OTE4ZjEzZjZjZTIzM2YxMmEwMzhiZDEzb47+Ng==: 00:15:50.646 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.646 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.646 00:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:50.646 00:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.646 00:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:50.646 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.646 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:50.646 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:50.905 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:15:50.905 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.905 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:50.905 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:50.905 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:50.905 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.905 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.905 00:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:50.905 00:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.905 00:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:50.906 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.906 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.164 00:15:51.164 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.164 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.164 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.164 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.164 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.164 00:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:51.164 00:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.164 00:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:51.164 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.164 { 00:15:51.164 "cntlid": 21, 00:15:51.164 "qid": 0, 00:15:51.164 "state": "enabled", 00:15:51.164 "thread": "nvmf_tgt_poll_group_000", 00:15:51.164 "listen_address": { 00:15:51.164 "trtype": "TCP", 00:15:51.164 "adrfam": "IPv4", 00:15:51.164 "traddr": "10.0.0.2", 00:15:51.164 "trsvcid": "4420" 00:15:51.164 }, 00:15:51.164 "peer_address": { 00:15:51.164 "trtype": "TCP", 00:15:51.164 "adrfam": "IPv4", 00:15:51.164 "traddr": "10.0.0.1", 00:15:51.164 "trsvcid": "52912" 00:15:51.164 }, 00:15:51.164 "auth": { 00:15:51.164 "state": "completed", 00:15:51.164 "digest": "sha256", 00:15:51.164 "dhgroup": "ffdhe3072" 00:15:51.164 } 00:15:51.164 } 00:15:51.164 ]' 00:15:51.164 00:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.164 00:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.164 00:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.424 00:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:51.424 00:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.424 00:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.424 00:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.424 00:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.424 00:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDk3MmY4MGNhMTQzNzNhOWJmZDRmMGU2Y2QxMjY2ZjEzOTcxYTM4ZDgzOGY5NzQ3C29SFA==: --dhchap-ctrl-secret DHHC-1:01:ZDRiN2I5OTFlOTRjNDlmMmM2YzEwNzcwMTU0MjE3OGQEbxAq: 00:15:51.992 00:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
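The pass that just finished (sha256 digest, ffdhe3072 group, key2) is one instance of the cycle this log repeats for every digest/dhgroup/key combination: pin the host-side initiator to a single DH-HMAC-CHAP parameter pair, allow the host NQN on the subsystem with a key pair, attach a controller so the exchange actually runs, verify the resulting qpair, then tear everything down. A minimal sketch of the provisioning half, assuming a running nvmf target, a second RPC server on /var/tmp/host.sock, and keys named key0/ckey0 registered beforehand (key registration happens before this excerpt); SPDK_DIR and HOST_NQN are placeholders, and hostrpc mirrors the auth.sh helper traced at target/auth.sh@31:

  SPDK_DIR=/path/to/spdk                                  # placeholder checkout path
  HOST_NQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"   # placeholder host NQN
  hostrpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }

  # Pin the initiator to one digest/dhgroup pair so the negotiation is deterministic.
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

  # key0 authenticates the host; adding ckey0 makes the authentication bidirectional.
  "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      "$HOST_NQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Attaching the controller is what actually performs the DH-HMAC-CHAP exchange.
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOST_NQN" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0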
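Each attach is then checked the same way before teardown: the host must report the nvme0 controller, and the subsystem's active qpair must carry the negotiated parameters in its auth object (the qpairs='[ ... ]' JSON dumps seen throughout this log). A sketch of those checks, reusing the helpers from the previous block:

  # Controller came up on the host side.
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # The qpair negotiated the expected digest/dhgroup and completed authentication.
  qpairs=$("$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Detach so the next digest/dhgroup/key combination starts clean.
  hostrpc bdev_nvme_detach_controller nvme0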
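The nvme connect / nvme disconnect pair closing each pass exercises the kernel initiator with the same key material, passed in the DHHC-1 text form DHHC-1:<t>:<base64 secret>:, where <t> names the hash used to transform the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512). A sketch with placeholder payloads; the base64 strings below are stand-ins, not keys from this run:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOST_NQN" --hostid "${HOST_NQN##*:}" \
      --dhchap-secret 'DHHC-1:00:<base64-host-key>:' \
      --dhchap-ctrl-secret 'DHHC-1:03:<base64-controller-key>:'

  nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: disconnected 1 controller(s)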
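One detail worth noting before the key3 pass that follows: every key3 command in this log (nvmf_subsystem_add_host, bdev_nvme_attach_controller, nvme connect) carries no controller key and no --dhchap-ctrl-secret, so key3 exercises one-way authentication only. That comes from the ${ckeys[$3]:+...} expansion traced at target/auth.sh@37, which yields an empty array when the controller-key slot is empty. An illustrative sketch of the idiom with made-up values:

  ckeys=(ckey0 ckey1 ckey2 "")   # illustrative: the key3 slot is deliberately empty
  for keyid in "${!ckeys[@]}"; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
      echo "key$keyid extra args: ${ckey[*]:-<none, one-way auth>}"
  done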
00:15:52.252 00:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:52.252 00:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:52.252 00:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.252 00:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:52.252 00:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.252 00:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:52.252 00:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:52.252 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:15:52.252 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.252 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:52.252 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:52.252 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:52.252 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.252 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:52.252 00:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:52.252 00:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.252 00:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:52.252 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:52.252 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:52.511 00:15:52.511 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.511 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.511 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.770 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.770 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.770 00:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:52.770 00:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:15:52.770 00:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:52.770 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.770 { 00:15:52.770 "cntlid": 23, 00:15:52.770 "qid": 0, 00:15:52.770 "state": "enabled", 00:15:52.770 "thread": "nvmf_tgt_poll_group_000", 00:15:52.770 "listen_address": { 00:15:52.770 "trtype": "TCP", 00:15:52.770 "adrfam": "IPv4", 00:15:52.770 "traddr": "10.0.0.2", 00:15:52.770 "trsvcid": "4420" 00:15:52.770 }, 00:15:52.770 "peer_address": { 00:15:52.770 "trtype": "TCP", 00:15:52.770 "adrfam": "IPv4", 00:15:52.770 "traddr": "10.0.0.1", 00:15:52.770 "trsvcid": "52922" 00:15:52.770 }, 00:15:52.770 "auth": { 00:15:52.770 "state": "completed", 00:15:52.770 "digest": "sha256", 00:15:52.770 "dhgroup": "ffdhe3072" 00:15:52.770 } 00:15:52.770 } 00:15:52.770 ]' 00:15:52.770 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.770 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.770 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.770 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:52.770 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.770 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.770 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.770 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.029 00:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTRlYzQ3MGNmODNmMDk4NDk4MzgzMmZkOWI0NjE4ZWE1YjRiNjEzMTI5NjA3YmVhOTg2MDZjNGZmZTlhYjRjMcBoLk4=: 00:15:53.598 00:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.598 00:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.598 00:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:53.598 00:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.598 00:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:53.598 00:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.598 00:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.598 00:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:53.598 00:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:53.859 00:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:15:53.859 00:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.859 00:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:53.859 00:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:53.859 00:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:53.859 00:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.859 00:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.859 00:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:53.859 00:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.859 00:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:53.859 00:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.859 00:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.118 00:15:54.118 00:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.118 00:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.118 00:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.377 00:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.377 00:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.377 00:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:54.377 00:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.377 00:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:54.377 00:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:54.377 { 00:15:54.377 "cntlid": 25, 00:15:54.377 "qid": 0, 00:15:54.377 "state": "enabled", 00:15:54.377 "thread": "nvmf_tgt_poll_group_000", 00:15:54.377 "listen_address": { 00:15:54.377 "trtype": "TCP", 00:15:54.377 "adrfam": "IPv4", 00:15:54.377 "traddr": "10.0.0.2", 00:15:54.377 "trsvcid": "4420" 00:15:54.377 }, 00:15:54.377 "peer_address": { 00:15:54.377 "trtype": "TCP", 00:15:54.377 "adrfam": "IPv4", 00:15:54.377 "traddr": "10.0.0.1", 00:15:54.377 "trsvcid": "52942" 00:15:54.377 }, 00:15:54.377 "auth": { 00:15:54.377 "state": "completed", 00:15:54.377 "digest": "sha256", 00:15:54.377 "dhgroup": "ffdhe4096" 00:15:54.377 } 00:15:54.377 } 00:15:54.377 ]' 00:15:54.377 00:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.377 00:17:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.377 00:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.377 00:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:54.377 00:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.377 00:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.377 00:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.377 00:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.641 00:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2IyYmQzYmZiMDU3YjVmYjMyNDkyMTA2ZTA4MDNjMmNjYTlmMWQzNDViZTU5NzFiAN8c7w==: --dhchap-ctrl-secret DHHC-1:03:OGFjZmI2ZGZjYTg5OGYyZTIxM2MyZDg1MjhlNWJlODA1ZGEwMjJhMzgwYTBhOGFjMDcxMWI1ZTI2YzQwYjlkYf2tkOw=: 00:15:55.208 00:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.208 00:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:55.208 00:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:55.208 00:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.208 00:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:55.208 00:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.208 00:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:55.208 00:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:55.467 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:15:55.467 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.467 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:55.467 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:55.467 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:55.467 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.467 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.467 00:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:55.467 00:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.467 00:17:14 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:55.467 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.467 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.726 00:15:55.726 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.726 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.726 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.726 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.726 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.726 00:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:55.726 00:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.985 00:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:55.985 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.985 { 00:15:55.985 "cntlid": 27, 00:15:55.985 "qid": 0, 00:15:55.985 "state": "enabled", 00:15:55.985 "thread": "nvmf_tgt_poll_group_000", 00:15:55.985 "listen_address": { 00:15:55.985 "trtype": "TCP", 00:15:55.985 "adrfam": "IPv4", 00:15:55.985 "traddr": "10.0.0.2", 00:15:55.985 "trsvcid": "4420" 00:15:55.985 }, 00:15:55.985 "peer_address": { 00:15:55.985 "trtype": "TCP", 00:15:55.985 "adrfam": "IPv4", 00:15:55.985 "traddr": "10.0.0.1", 00:15:55.985 "trsvcid": "52962" 00:15:55.985 }, 00:15:55.985 "auth": { 00:15:55.985 "state": "completed", 00:15:55.985 "digest": "sha256", 00:15:55.985 "dhgroup": "ffdhe4096" 00:15:55.985 } 00:15:55.985 } 00:15:55.985 ]' 00:15:55.985 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.985 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.985 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.985 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:55.985 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.985 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.985 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.985 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.256 00:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWU3ZjkwYTI2MGY1ZGMwYWY5MTFiZDAzZjkwZTVhZDGjnqBT: --dhchap-ctrl-secret DHHC-1:02:YmJmNzhmZGM5MmQ0YjA5N2RmZWNlNDk1OTE4ZjEzZjZjZTIzM2YxMmEwMzhiZDEzb47+Ng==: 00:15:56.881 00:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.881 00:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:56.881 00:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:56.881 00:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.881 00:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:56.881 00:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.881 00:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:56.881 00:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:56.881 00:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:15:56.881 00:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.881 00:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:56.881 00:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:56.881 00:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:56.881 00:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.881 00:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.881 00:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:56.881 00:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.881 00:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:56.881 00:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.881 00:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.139 00:15:57.139 00:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.139 00:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.139 00:17:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.397 00:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.397 00:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.397 00:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:57.397 00:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.397 00:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:57.397 00:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.397 { 00:15:57.397 "cntlid": 29, 00:15:57.397 "qid": 0, 00:15:57.397 "state": "enabled", 00:15:57.397 "thread": "nvmf_tgt_poll_group_000", 00:15:57.397 "listen_address": { 00:15:57.397 "trtype": "TCP", 00:15:57.397 "adrfam": "IPv4", 00:15:57.397 "traddr": "10.0.0.2", 00:15:57.397 "trsvcid": "4420" 00:15:57.397 }, 00:15:57.397 "peer_address": { 00:15:57.397 "trtype": "TCP", 00:15:57.397 "adrfam": "IPv4", 00:15:57.397 "traddr": "10.0.0.1", 00:15:57.397 "trsvcid": "53002" 00:15:57.397 }, 00:15:57.397 "auth": { 00:15:57.397 "state": "completed", 00:15:57.397 "digest": "sha256", 00:15:57.397 "dhgroup": "ffdhe4096" 00:15:57.397 } 00:15:57.397 } 00:15:57.397 ]' 00:15:57.397 00:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.397 00:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.397 00:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.397 00:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:57.397 00:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.397 00:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.397 00:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.397 00:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.656 00:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDk3MmY4MGNhMTQzNzNhOWJmZDRmMGU2Y2QxMjY2ZjEzOTcxYTM4ZDgzOGY5NzQ3C29SFA==: --dhchap-ctrl-secret DHHC-1:01:ZDRiN2I5OTFlOTRjNDlmMmM2YzEwNzcwMTU0MjE3OGQEbxAq: 00:15:58.221 00:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.221 00:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:58.221 00:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:58.221 00:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.221 00:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:58.221 00:17:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.221 00:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:58.221 00:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:58.478 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:15:58.478 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.478 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:58.478 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:58.478 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:58.478 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.478 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:58.478 00:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:58.478 00:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.478 00:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:58.478 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:58.478 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:58.735 00:15:58.735 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.735 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.735 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.993 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.993 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.993 00:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:58.993 00:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.993 00:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:58.993 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.993 { 00:15:58.993 "cntlid": 31, 00:15:58.993 "qid": 0, 00:15:58.993 "state": "enabled", 00:15:58.993 "thread": "nvmf_tgt_poll_group_000", 00:15:58.993 "listen_address": { 00:15:58.993 "trtype": "TCP", 00:15:58.993 "adrfam": "IPv4", 00:15:58.993 "traddr": "10.0.0.2", 00:15:58.993 "trsvcid": "4420" 00:15:58.993 }, 
00:15:58.993 "peer_address": { 00:15:58.993 "trtype": "TCP", 00:15:58.993 "adrfam": "IPv4", 00:15:58.993 "traddr": "10.0.0.1", 00:15:58.993 "trsvcid": "38368" 00:15:58.993 }, 00:15:58.993 "auth": { 00:15:58.993 "state": "completed", 00:15:58.993 "digest": "sha256", 00:15:58.993 "dhgroup": "ffdhe4096" 00:15:58.993 } 00:15:58.993 } 00:15:58.993 ]' 00:15:58.993 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.993 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.993 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.993 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:58.993 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.993 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.993 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.993 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.251 00:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTRlYzQ3MGNmODNmMDk4NDk4MzgzMmZkOWI0NjE4ZWE1YjRiNjEzMTI5NjA3YmVhOTg2MDZjNGZmZTlhYjRjMcBoLk4=: 00:15:59.816 00:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.816 00:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.816 00:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:59.816 00:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.816 00:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:59.816 00:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:59.816 00:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.816 00:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:59.816 00:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:59.816 00:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:15:59.816 00:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.816 00:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:59.816 00:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:59.816 00:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:59.816 00:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:15:59.816 00:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.816 00:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:59.816 00:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.816 00:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:59.816 00:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.817 00:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.383 00:16:00.383 00:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.383 00:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.383 00:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.383 00:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.383 00:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.383 00:17:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:00.383 00:17:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.383 00:17:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:00.383 00:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.383 { 00:16:00.383 "cntlid": 33, 00:16:00.383 "qid": 0, 00:16:00.383 "state": "enabled", 00:16:00.383 "thread": "nvmf_tgt_poll_group_000", 00:16:00.383 "listen_address": { 00:16:00.383 "trtype": "TCP", 00:16:00.383 "adrfam": "IPv4", 00:16:00.383 "traddr": "10.0.0.2", 00:16:00.383 "trsvcid": "4420" 00:16:00.383 }, 00:16:00.383 "peer_address": { 00:16:00.383 "trtype": "TCP", 00:16:00.383 "adrfam": "IPv4", 00:16:00.383 "traddr": "10.0.0.1", 00:16:00.383 "trsvcid": "38392" 00:16:00.383 }, 00:16:00.383 "auth": { 00:16:00.383 "state": "completed", 00:16:00.383 "digest": "sha256", 00:16:00.383 "dhgroup": "ffdhe6144" 00:16:00.383 } 00:16:00.383 } 00:16:00.383 ]' 00:16:00.383 00:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.383 00:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:00.383 00:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.644 00:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:00.644 00:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.644 00:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.644 00:17:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.644 00:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.644 00:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2IyYmQzYmZiMDU3YjVmYjMyNDkyMTA2ZTA4MDNjMmNjYTlmMWQzNDViZTU5NzFiAN8c7w==: --dhchap-ctrl-secret DHHC-1:03:OGFjZmI2ZGZjYTg5OGYyZTIxM2MyZDg1MjhlNWJlODA1ZGEwMjJhMzgwYTBhOGFjMDcxMWI1ZTI2YzQwYjlkYf2tkOw=: 00:16:01.211 00:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.211 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.211 00:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:01.211 00:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.211 00:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:01.211 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.211 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:01.211 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:01.470 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:01.470 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.470 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:01.470 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:01.470 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:01.470 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.470 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.470 00:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:01.470 00:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.470 00:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:01.470 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.470 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.727 00:16:01.728 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.728 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.728 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.986 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.986 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.986 00:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:01.986 00:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.986 00:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:01.986 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.986 { 00:16:01.986 "cntlid": 35, 00:16:01.986 "qid": 0, 00:16:01.986 "state": "enabled", 00:16:01.986 "thread": "nvmf_tgt_poll_group_000", 00:16:01.986 "listen_address": { 00:16:01.986 "trtype": "TCP", 00:16:01.986 "adrfam": "IPv4", 00:16:01.986 "traddr": "10.0.0.2", 00:16:01.986 "trsvcid": "4420" 00:16:01.986 }, 00:16:01.986 "peer_address": { 00:16:01.986 "trtype": "TCP", 00:16:01.986 "adrfam": "IPv4", 00:16:01.986 "traddr": "10.0.0.1", 00:16:01.986 "trsvcid": "38408" 00:16:01.986 }, 00:16:01.986 "auth": { 00:16:01.986 "state": "completed", 00:16:01.986 "digest": "sha256", 00:16:01.986 "dhgroup": "ffdhe6144" 00:16:01.986 } 00:16:01.986 } 00:16:01.986 ]' 00:16:01.986 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.986 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.986 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.986 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:01.986 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.245 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.245 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.245 00:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.245 00:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWU3ZjkwYTI2MGY1ZGMwYWY5MTFiZDAzZjkwZTVhZDGjnqBT: --dhchap-ctrl-secret DHHC-1:02:YmJmNzhmZGM5MmQ0YjA5N2RmZWNlNDk1OTE4ZjEzZjZjZTIzM2YxMmEwMzhiZDEzb47+Ng==: 00:16:02.812 00:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.812 00:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:02.812 00:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:02.812 00:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.812 00:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:02.812 00:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.812 00:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:02.812 00:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:03.070 00:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:03.070 00:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.070 00:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:03.070 00:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:03.070 00:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:03.070 00:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.070 00:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.070 00:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:03.070 00:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.070 00:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:03.070 00:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.070 00:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.329 00:16:03.329 00:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.329 00:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.329 00:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.588 00:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.588 00:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.588 00:17:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:03.588 00:17:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:03.588 00:17:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:03.588 00:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.588 { 00:16:03.588 "cntlid": 37, 00:16:03.588 "qid": 0, 00:16:03.588 "state": "enabled", 00:16:03.588 "thread": "nvmf_tgt_poll_group_000", 00:16:03.588 "listen_address": { 00:16:03.588 "trtype": "TCP", 00:16:03.588 "adrfam": "IPv4", 00:16:03.588 "traddr": "10.0.0.2", 00:16:03.588 "trsvcid": "4420" 00:16:03.588 }, 00:16:03.588 "peer_address": { 00:16:03.588 "trtype": "TCP", 00:16:03.588 "adrfam": "IPv4", 00:16:03.588 "traddr": "10.0.0.1", 00:16:03.588 "trsvcid": "38430" 00:16:03.588 }, 00:16:03.588 "auth": { 00:16:03.588 "state": "completed", 00:16:03.588 "digest": "sha256", 00:16:03.588 "dhgroup": "ffdhe6144" 00:16:03.588 } 00:16:03.588 } 00:16:03.588 ]' 00:16:03.588 00:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.588 00:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.589 00:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.589 00:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:03.589 00:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.847 00:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.847 00:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.847 00:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.847 00:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDk3MmY4MGNhMTQzNzNhOWJmZDRmMGU2Y2QxMjY2ZjEzOTcxYTM4ZDgzOGY5NzQ3C29SFA==: --dhchap-ctrl-secret DHHC-1:01:ZDRiN2I5OTFlOTRjNDlmMmM2YzEwNzcwMTU0MjE3OGQEbxAq: 00:16:04.414 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.414 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.414 00:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:04.414 00:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.414 00:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:04.414 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.414 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:04.414 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:04.672 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:16:04.672 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.672 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:04.672 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:04.672 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:04.672 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.672 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:04.672 00:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:04.672 00:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.672 00:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:04.672 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:04.672 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:04.930 00:16:04.930 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.930 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.930 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.189 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.189 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.189 00:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:05.189 00:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.189 00:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:05.189 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.189 { 00:16:05.189 "cntlid": 39, 00:16:05.189 "qid": 0, 00:16:05.189 "state": "enabled", 00:16:05.189 "thread": "nvmf_tgt_poll_group_000", 00:16:05.189 "listen_address": { 00:16:05.189 "trtype": "TCP", 00:16:05.189 "adrfam": "IPv4", 00:16:05.189 "traddr": "10.0.0.2", 00:16:05.189 "trsvcid": "4420" 00:16:05.189 }, 00:16:05.189 "peer_address": { 00:16:05.189 "trtype": "TCP", 00:16:05.189 "adrfam": "IPv4", 00:16:05.189 "traddr": "10.0.0.1", 00:16:05.189 "trsvcid": "38456" 00:16:05.189 }, 00:16:05.189 "auth": { 00:16:05.189 "state": "completed", 00:16:05.189 "digest": "sha256", 00:16:05.189 "dhgroup": "ffdhe6144" 00:16:05.189 } 00:16:05.189 } 00:16:05.189 ]' 00:16:05.189 00:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.189 00:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.189 00:17:24 
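
Note the asymmetry in the key3 cycle above: nvmf_subsystem_add_host and bdev_nvme_attach_controller pass --dhchap-key key3 only, whereas the earlier keys also carried --dhchap-ctrlr-key. The parameter expansion visible in the trace explains it: when the matching ckeys entry is empty, the controller-key flag is dropped and the cycle exercises unidirectional (host-authentication-only) DH-HMAC-CHAP. Schematically (the subsystem/host NQN variables are illustrative placeholders):

    # Expands to "--dhchap-ctrlr-key ckeyN" only when ckeys[$3] is non-empty;
    # an empty entry (as for key3 here) means no controller challenge is configured.
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"
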
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.448 00:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:05.448 00:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.448 00:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.448 00:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.448 00:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.448 00:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTRlYzQ3MGNmODNmMDk4NDk4MzgzMmZkOWI0NjE4ZWE1YjRiNjEzMTI5NjA3YmVhOTg2MDZjNGZmZTlhYjRjMcBoLk4=: 00:16:06.016 00:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.016 00:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:06.016 00:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:06.016 00:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.016 00:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:06.016 00:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.017 00:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.017 00:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:06.017 00:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:06.276 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:06.276 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.276 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:06.276 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:06.276 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:06.276 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.276 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.276 00:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:06.276 00:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.276 00:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:06.276 00:17:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.276 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.844 00:16:06.844 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.844 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.844 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.103 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.103 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.103 00:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:07.103 00:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.103 00:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:07.103 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.103 { 00:16:07.103 "cntlid": 41, 00:16:07.103 "qid": 0, 00:16:07.103 "state": "enabled", 00:16:07.103 "thread": "nvmf_tgt_poll_group_000", 00:16:07.103 "listen_address": { 00:16:07.103 "trtype": "TCP", 00:16:07.103 "adrfam": "IPv4", 00:16:07.103 "traddr": "10.0.0.2", 00:16:07.103 "trsvcid": "4420" 00:16:07.103 }, 00:16:07.103 "peer_address": { 00:16:07.103 "trtype": "TCP", 00:16:07.103 "adrfam": "IPv4", 00:16:07.103 "traddr": "10.0.0.1", 00:16:07.103 "trsvcid": "38494" 00:16:07.103 }, 00:16:07.103 "auth": { 00:16:07.103 "state": "completed", 00:16:07.103 "digest": "sha256", 00:16:07.103 "dhgroup": "ffdhe8192" 00:16:07.103 } 00:16:07.103 } 00:16:07.103 ]' 00:16:07.103 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.103 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.103 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.103 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:07.103 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.103 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.103 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.103 00:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.362 00:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:N2IyYmQzYmZiMDU3YjVmYjMyNDkyMTA2ZTA4MDNjMmNjYTlmMWQzNDViZTU5NzFiAN8c7w==: --dhchap-ctrl-secret DHHC-1:03:OGFjZmI2ZGZjYTg5OGYyZTIxM2MyZDg1MjhlNWJlODA1ZGEwMjJhMzgwYTBhOGFjMDcxMWI1ZTI2YzQwYjlkYf2tkOw=: 00:16:07.930 00:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.930 00:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:07.930 00:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:07.930 00:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.930 00:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:07.930 00:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.930 00:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:07.930 00:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:07.930 00:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:07.930 00:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.930 00:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:07.930 00:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:07.930 00:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:07.930 00:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.930 00:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.930 00:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:07.930 00:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.189 00:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:08.189 00:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.189 00:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.448 00:16:08.448 00:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.448 00:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.448 00:17:27 
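
The secrets passed to nvme connect follow the DH-HMAC-CHAP secret representation from the NVMe specification, DHHC-1:<t>:<base64 data>:, where the <t> field records how the secret was transformed (00 appears to mean untransformed, with 01/02/03 for SHA-256/384/512), which matches the key0..key3 numbering used throughout this run. Secrets in this format can be generated with nvme-cli, e.g. (illustrative invocation; exact option support depends on the nvme-cli version):

    # Produce a SHA-256-transformed DH-HMAC-CHAP secret bound to the host NQN
    nvme gen-dhchap-key --hmac=1 --key-length=32 \
        --nqn nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    # prints something of the form: DHHC-1:01:<base64>:
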
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.707 00:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.707 00:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.707 00:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:08.707 00:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.707 00:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:08.707 00:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.707 { 00:16:08.707 "cntlid": 43, 00:16:08.707 "qid": 0, 00:16:08.707 "state": "enabled", 00:16:08.707 "thread": "nvmf_tgt_poll_group_000", 00:16:08.707 "listen_address": { 00:16:08.707 "trtype": "TCP", 00:16:08.707 "adrfam": "IPv4", 00:16:08.707 "traddr": "10.0.0.2", 00:16:08.707 "trsvcid": "4420" 00:16:08.707 }, 00:16:08.707 "peer_address": { 00:16:08.707 "trtype": "TCP", 00:16:08.707 "adrfam": "IPv4", 00:16:08.707 "traddr": "10.0.0.1", 00:16:08.707 "trsvcid": "52494" 00:16:08.707 }, 00:16:08.707 "auth": { 00:16:08.707 "state": "completed", 00:16:08.707 "digest": "sha256", 00:16:08.707 "dhgroup": "ffdhe8192" 00:16:08.707 } 00:16:08.707 } 00:16:08.707 ]' 00:16:08.707 00:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.707 00:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.707 00:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.707 00:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:08.707 00:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.966 00:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.966 00:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.966 00:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.966 00:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWU3ZjkwYTI2MGY1ZGMwYWY5MTFiZDAzZjkwZTVhZDGjnqBT: --dhchap-ctrl-secret DHHC-1:02:YmJmNzhmZGM5MmQ0YjA5N2RmZWNlNDk1OTE4ZjEzZjZjZTIzM2YxMmEwMzhiZDEzb47+Ng==: 00:16:09.533 00:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.534 00:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:09.534 00:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:09.534 00:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.534 00:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:09.534 00:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.534 00:17:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:09.534 00:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:09.792 00:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:09.792 00:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.792 00:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:09.792 00:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:09.792 00:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:09.792 00:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.792 00:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.792 00:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:09.792 00:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.792 00:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:09.793 00:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.793 00:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.359 00:16:10.359 00:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.359 00:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.359 00:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.359 00:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.359 00:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.359 00:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:10.359 00:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.359 00:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:10.359 00:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.359 { 00:16:10.359 "cntlid": 45, 00:16:10.359 "qid": 0, 00:16:10.359 "state": "enabled", 00:16:10.359 "thread": "nvmf_tgt_poll_group_000", 00:16:10.359 "listen_address": { 00:16:10.359 "trtype": "TCP", 00:16:10.359 "adrfam": "IPv4", 00:16:10.359 "traddr": "10.0.0.2", 00:16:10.359 "trsvcid": "4420" 00:16:10.359 }, 00:16:10.359 
"peer_address": { 00:16:10.359 "trtype": "TCP", 00:16:10.359 "adrfam": "IPv4", 00:16:10.359 "traddr": "10.0.0.1", 00:16:10.359 "trsvcid": "52526" 00:16:10.359 }, 00:16:10.359 "auth": { 00:16:10.359 "state": "completed", 00:16:10.359 "digest": "sha256", 00:16:10.359 "dhgroup": "ffdhe8192" 00:16:10.359 } 00:16:10.359 } 00:16:10.359 ]' 00:16:10.359 00:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.359 00:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.359 00:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.617 00:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:10.617 00:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.617 00:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.617 00:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.617 00:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.617 00:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDk3MmY4MGNhMTQzNzNhOWJmZDRmMGU2Y2QxMjY2ZjEzOTcxYTM4ZDgzOGY5NzQ3C29SFA==: --dhchap-ctrl-secret DHHC-1:01:ZDRiN2I5OTFlOTRjNDlmMmM2YzEwNzcwMTU0MjE3OGQEbxAq: 00:16:11.185 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.185 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.185 00:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:11.185 00:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.185 00:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:11.185 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.185 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:11.185 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:11.444 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:11.444 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.444 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:11.444 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:11.444 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:11.444 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.444 00:17:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:11.444 00:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:11.444 00:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.444 00:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:11.444 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:11.444 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:12.012 00:16:12.012 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.012 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.012 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.271 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.271 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.271 00:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:12.271 00:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.271 00:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:12.271 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.271 { 00:16:12.271 "cntlid": 47, 00:16:12.271 "qid": 0, 00:16:12.271 "state": "enabled", 00:16:12.271 "thread": "nvmf_tgt_poll_group_000", 00:16:12.271 "listen_address": { 00:16:12.271 "trtype": "TCP", 00:16:12.271 "adrfam": "IPv4", 00:16:12.271 "traddr": "10.0.0.2", 00:16:12.271 "trsvcid": "4420" 00:16:12.271 }, 00:16:12.271 "peer_address": { 00:16:12.271 "trtype": "TCP", 00:16:12.271 "adrfam": "IPv4", 00:16:12.271 "traddr": "10.0.0.1", 00:16:12.271 "trsvcid": "52544" 00:16:12.271 }, 00:16:12.271 "auth": { 00:16:12.271 "state": "completed", 00:16:12.271 "digest": "sha256", 00:16:12.271 "dhgroup": "ffdhe8192" 00:16:12.271 } 00:16:12.271 } 00:16:12.271 ]' 00:16:12.271 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.271 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.271 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.271 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:12.271 00:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.271 00:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.271 00:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.271 00:17:31 
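
Two RPC paths interleave throughout this log: rpc_cmd drives the nvmf target over its default socket, while the hostrpc wrapper expanded at target/auth.sh@31 pins -s /var/tmp/host.sock so the same rpc.py reaches the second SPDK application playing the NVMe-oF host role. A sketch of the wrapper as implied by the trace (the rootdir and hostnqn variables are illustrative; the log shows the full workspace path and UUID-based NQN):

    # Route host-side management calls to the host app's RPC socket
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }

    # Example matching the key3 cycle above: attach with host-only authentication
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
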
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.545 00:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTRlYzQ3MGNmODNmMDk4NDk4MzgzMmZkOWI0NjE4ZWE1YjRiNjEzMTI5NjA3YmVhOTg2MDZjNGZmZTlhYjRjMcBoLk4=: 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.141 00:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.399 00:16:13.399 00:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.399 00:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.399 00:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.657 00:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.657 00:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.657 00:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:13.657 00:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.657 00:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:13.657 00:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.657 { 00:16:13.657 "cntlid": 49, 00:16:13.657 "qid": 0, 00:16:13.657 "state": "enabled", 00:16:13.657 "thread": "nvmf_tgt_poll_group_000", 00:16:13.657 "listen_address": { 00:16:13.657 "trtype": "TCP", 00:16:13.657 "adrfam": "IPv4", 00:16:13.657 "traddr": "10.0.0.2", 00:16:13.657 "trsvcid": "4420" 00:16:13.657 }, 00:16:13.657 "peer_address": { 00:16:13.657 "trtype": "TCP", 00:16:13.657 "adrfam": "IPv4", 00:16:13.657 "traddr": "10.0.0.1", 00:16:13.657 "trsvcid": "52580" 00:16:13.657 }, 00:16:13.657 "auth": { 00:16:13.657 "state": "completed", 00:16:13.657 "digest": "sha384", 00:16:13.657 "dhgroup": "null" 00:16:13.657 } 00:16:13.657 } 00:16:13.657 ]' 00:16:13.657 00:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.657 00:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.657 00:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.657 00:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:13.657 00:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.915 00:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.915 00:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.915 00:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.915 00:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2IyYmQzYmZiMDU3YjVmYjMyNDkyMTA2ZTA4MDNjMmNjYTlmMWQzNDViZTU5NzFiAN8c7w==: --dhchap-ctrl-secret DHHC-1:03:OGFjZmI2ZGZjYTg5OGYyZTIxM2MyZDg1MjhlNWJlODA1ZGEwMjJhMzgwYTBhOGFjMDcxMWI1ZTI2YzQwYjlkYf2tkOw=: 00:16:14.480 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.480 00:17:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.480 00:17:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:14.480 00:17:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.480 00:17:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:14.480 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:14.481 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:14.481 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:14.739 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:14.739 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.739 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:14.739 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:14.739 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:14.739 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.739 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.739 00:17:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:14.739 00:17:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.739 00:17:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:14.739 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.739 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.997 00:16:14.997 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.997 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.998 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.255 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.255 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.255 00:17:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:15.255 00:17:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:15.255 00:17:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:15.255 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.255 { 00:16:15.255 "cntlid": 51, 00:16:15.256 "qid": 0, 00:16:15.256 "state": "enabled", 00:16:15.256 "thread": "nvmf_tgt_poll_group_000", 00:16:15.256 "listen_address": { 00:16:15.256 "trtype": "TCP", 00:16:15.256 "adrfam": "IPv4", 00:16:15.256 "traddr": "10.0.0.2", 00:16:15.256 "trsvcid": "4420" 00:16:15.256 }, 00:16:15.256 "peer_address": { 00:16:15.256 "trtype": "TCP", 00:16:15.256 "adrfam": "IPv4", 00:16:15.256 "traddr": "10.0.0.1", 00:16:15.256 "trsvcid": "52602" 00:16:15.256 }, 00:16:15.256 "auth": { 00:16:15.256 "state": "completed", 00:16:15.256 "digest": "sha384", 00:16:15.256 "dhgroup": "null" 00:16:15.256 } 00:16:15.256 } 00:16:15.256 ]' 00:16:15.256 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.256 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.256 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.256 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:15.256 00:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.256 00:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.256 00:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.256 00:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.513 00:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWU3ZjkwYTI2MGY1ZGMwYWY5MTFiZDAzZjkwZTVhZDGjnqBT: --dhchap-ctrl-secret DHHC-1:02:YmJmNzhmZGM5MmQ0YjA5N2RmZWNlNDk1OTE4ZjEzZjZjZTIzM2YxMmEwMzhiZDEzb47+Ng==: 00:16:16.080 00:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.080 00:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.080 00:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:16.080 00:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.080 00:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:16.080 00:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.080 00:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:16.080 00:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:16.080 00:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:16.080 00:17:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.080 00:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:16.080 00:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:16.080 00:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:16.080 00:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.080 00:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.080 00:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:16.080 00:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.339 00:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:16.339 00:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.339 00:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.339 00:16:16.339 00:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.339 00:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.339 00:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.597 00:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.597 00:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.597 00:17:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:16.597 00:17:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.597 00:17:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:16.597 00:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.597 { 00:16:16.597 "cntlid": 53, 00:16:16.597 "qid": 0, 00:16:16.597 "state": "enabled", 00:16:16.597 "thread": "nvmf_tgt_poll_group_000", 00:16:16.597 "listen_address": { 00:16:16.597 "trtype": "TCP", 00:16:16.597 "adrfam": "IPv4", 00:16:16.597 "traddr": "10.0.0.2", 00:16:16.597 "trsvcid": "4420" 00:16:16.597 }, 00:16:16.597 "peer_address": { 00:16:16.597 "trtype": "TCP", 00:16:16.597 "adrfam": "IPv4", 00:16:16.597 "traddr": "10.0.0.1", 00:16:16.597 "trsvcid": "52624" 00:16:16.597 }, 00:16:16.597 "auth": { 00:16:16.597 "state": "completed", 00:16:16.597 "digest": "sha384", 00:16:16.597 "dhgroup": "null" 00:16:16.597 } 00:16:16.597 } 00:16:16.597 ]' 00:16:16.597 00:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.597 00:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:16:16.597 00:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.597 00:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:16.597 00:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.854 00:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.854 00:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.854 00:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.854 00:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDk3MmY4MGNhMTQzNzNhOWJmZDRmMGU2Y2QxMjY2ZjEzOTcxYTM4ZDgzOGY5NzQ3C29SFA==: --dhchap-ctrl-secret DHHC-1:01:ZDRiN2I5OTFlOTRjNDlmMmM2YzEwNzcwMTU0MjE3OGQEbxAq: 00:16:17.418 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.418 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.418 00:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:17.418 00:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.418 00:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:17.418 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.418 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:17.418 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:17.676 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:16:17.676 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.676 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:17.676 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:17.676 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:17.676 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.676 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:17.676 00:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:17.676 00:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.676 00:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:17.676 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.676 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.934 00:16:17.935 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.935 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.935 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.194 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.194 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.194 00:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:18.194 00:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.194 00:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:18.194 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.194 { 00:16:18.194 "cntlid": 55, 00:16:18.194 "qid": 0, 00:16:18.194 "state": "enabled", 00:16:18.194 "thread": "nvmf_tgt_poll_group_000", 00:16:18.194 "listen_address": { 00:16:18.194 "trtype": "TCP", 00:16:18.194 "adrfam": "IPv4", 00:16:18.194 "traddr": "10.0.0.2", 00:16:18.194 "trsvcid": "4420" 00:16:18.194 }, 00:16:18.194 "peer_address": { 00:16:18.194 "trtype": "TCP", 00:16:18.194 "adrfam": "IPv4", 00:16:18.194 "traddr": "10.0.0.1", 00:16:18.194 "trsvcid": "52654" 00:16:18.194 }, 00:16:18.194 "auth": { 00:16:18.194 "state": "completed", 00:16:18.194 "digest": "sha384", 00:16:18.194 "dhgroup": "null" 00:16:18.194 } 00:16:18.194 } 00:16:18.194 ]' 00:16:18.194 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.194 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.194 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.194 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:18.194 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.194 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.194 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.194 00:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.453 00:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTRlYzQ3MGNmODNmMDk4NDk4MzgzMmZkOWI0NjE4ZWE1YjRiNjEzMTI5NjA3YmVhOTg2MDZjNGZmZTlhYjRjMcBoLk4=: 00:16:19.020 00:17:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.020 00:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:19.020 00:17:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:19.020 00:17:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.020 00:17:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:19.020 00:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.020 00:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.020 00:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.020 00:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.020 00:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:16:19.020 00:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.020 00:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:19.020 00:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:19.020 00:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:19.020 00:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.020 00:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.020 00:17:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:19.020 00:17:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.020 00:17:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:19.020 00:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.020 00:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.279 00:16:19.279 00:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.279 00:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.279 00:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.538 00:17:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.538 00:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.538 00:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:19.538 00:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.538 00:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:19.538 00:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.538 { 00:16:19.538 "cntlid": 57, 00:16:19.538 "qid": 0, 00:16:19.538 "state": "enabled", 00:16:19.538 "thread": "nvmf_tgt_poll_group_000", 00:16:19.538 "listen_address": { 00:16:19.538 "trtype": "TCP", 00:16:19.538 "adrfam": "IPv4", 00:16:19.538 "traddr": "10.0.0.2", 00:16:19.538 "trsvcid": "4420" 00:16:19.538 }, 00:16:19.538 "peer_address": { 00:16:19.538 "trtype": "TCP", 00:16:19.538 "adrfam": "IPv4", 00:16:19.538 "traddr": "10.0.0.1", 00:16:19.538 "trsvcid": "47210" 00:16:19.538 }, 00:16:19.538 "auth": { 00:16:19.538 "state": "completed", 00:16:19.538 "digest": "sha384", 00:16:19.538 "dhgroup": "ffdhe2048" 00:16:19.538 } 00:16:19.538 } 00:16:19.538 ]' 00:16:19.538 00:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.538 00:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.538 00:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.538 00:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:19.538 00:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.797 00:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.797 00:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.797 00:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.797 00:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2IyYmQzYmZiMDU3YjVmYjMyNDkyMTA2ZTA4MDNjMmNjYTlmMWQzNDViZTU5NzFiAN8c7w==: --dhchap-ctrl-secret DHHC-1:03:OGFjZmI2ZGZjYTg5OGYyZTIxM2MyZDg1MjhlNWJlODA1ZGEwMjJhMzgwYTBhOGFjMDcxMWI1ZTI2YzQwYjlkYf2tkOw=: 00:16:20.364 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.364 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:20.364 00:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:20.364 00:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.364 00:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:20.364 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.364 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:20.364 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:20.623 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:20.623 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.623 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:20.623 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:20.623 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:20.623 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.623 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.623 00:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:20.623 00:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.623 00:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:20.623 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.623 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.881 00:16:20.881 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.881 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.881 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.138 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.138 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.138 00:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:21.138 00:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.138 00:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:21.138 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.138 { 00:16:21.138 "cntlid": 59, 00:16:21.138 "qid": 0, 00:16:21.138 "state": "enabled", 00:16:21.138 "thread": "nvmf_tgt_poll_group_000", 00:16:21.138 "listen_address": { 00:16:21.138 "trtype": "TCP", 00:16:21.138 "adrfam": "IPv4", 00:16:21.138 "traddr": "10.0.0.2", 00:16:21.138 "trsvcid": "4420" 00:16:21.138 }, 00:16:21.138 "peer_address": { 00:16:21.138 "trtype": "TCP", 00:16:21.138 "adrfam": "IPv4", 00:16:21.138 
"traddr": "10.0.0.1", 00:16:21.138 "trsvcid": "47234" 00:16:21.138 }, 00:16:21.138 "auth": { 00:16:21.138 "state": "completed", 00:16:21.138 "digest": "sha384", 00:16:21.138 "dhgroup": "ffdhe2048" 00:16:21.138 } 00:16:21.138 } 00:16:21.138 ]' 00:16:21.138 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.138 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.138 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.138 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:21.138 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.138 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.138 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.138 00:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.396 00:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWU3ZjkwYTI2MGY1ZGMwYWY5MTFiZDAzZjkwZTVhZDGjnqBT: --dhchap-ctrl-secret DHHC-1:02:YmJmNzhmZGM5MmQ0YjA5N2RmZWNlNDk1OTE4ZjEzZjZjZTIzM2YxMmEwMzhiZDEzb47+Ng==: 00:16:21.960 00:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.960 00:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.960 00:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:21.960 00:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.960 00:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:21.960 00:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.960 00:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:21.960 00:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:22.218 00:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:16:22.218 00:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.218 00:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:22.218 00:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:22.218 00:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:22.218 00:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.218 00:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.218 00:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:22.218 00:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.218 00:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:22.218 00:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.218 00:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.218 00:16:22.476 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.476 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.476 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.476 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.476 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.476 00:17:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:22.476 00:17:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.476 00:17:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:22.476 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.476 { 00:16:22.476 "cntlid": 61, 00:16:22.476 "qid": 0, 00:16:22.476 "state": "enabled", 00:16:22.476 "thread": "nvmf_tgt_poll_group_000", 00:16:22.476 "listen_address": { 00:16:22.476 "trtype": "TCP", 00:16:22.476 "adrfam": "IPv4", 00:16:22.476 "traddr": "10.0.0.2", 00:16:22.476 "trsvcid": "4420" 00:16:22.476 }, 00:16:22.476 "peer_address": { 00:16:22.476 "trtype": "TCP", 00:16:22.476 "adrfam": "IPv4", 00:16:22.476 "traddr": "10.0.0.1", 00:16:22.476 "trsvcid": "47266" 00:16:22.476 }, 00:16:22.476 "auth": { 00:16:22.476 "state": "completed", 00:16:22.476 "digest": "sha384", 00:16:22.476 "dhgroup": "ffdhe2048" 00:16:22.476 } 00:16:22.476 } 00:16:22.476 ]' 00:16:22.476 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.476 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.476 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.733 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:22.733 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.733 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.733 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.733 00:17:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.733 00:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDk3MmY4MGNhMTQzNzNhOWJmZDRmMGU2Y2QxMjY2ZjEzOTcxYTM4ZDgzOGY5NzQ3C29SFA==: --dhchap-ctrl-secret DHHC-1:01:ZDRiN2I5OTFlOTRjNDlmMmM2YzEwNzcwMTU0MjE3OGQEbxAq: 00:16:23.297 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.297 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:23.297 00:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:23.297 00:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.298 00:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:23.298 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.298 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:23.298 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:23.555 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:16:23.555 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.555 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:23.555 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:23.555 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:23.556 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.556 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:23.556 00:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:23.556 00:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.556 00:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:23.556 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:23.556 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:23.813 00:16:23.813 00:17:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.813 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.813 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.071 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.071 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.071 00:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:24.071 00:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.071 00:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:24.071 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.071 { 00:16:24.071 "cntlid": 63, 00:16:24.071 "qid": 0, 00:16:24.071 "state": "enabled", 00:16:24.071 "thread": "nvmf_tgt_poll_group_000", 00:16:24.071 "listen_address": { 00:16:24.071 "trtype": "TCP", 00:16:24.071 "adrfam": "IPv4", 00:16:24.071 "traddr": "10.0.0.2", 00:16:24.071 "trsvcid": "4420" 00:16:24.071 }, 00:16:24.071 "peer_address": { 00:16:24.071 "trtype": "TCP", 00:16:24.071 "adrfam": "IPv4", 00:16:24.071 "traddr": "10.0.0.1", 00:16:24.071 "trsvcid": "47284" 00:16:24.071 }, 00:16:24.071 "auth": { 00:16:24.071 "state": "completed", 00:16:24.071 "digest": "sha384", 00:16:24.071 "dhgroup": "ffdhe2048" 00:16:24.071 } 00:16:24.071 } 00:16:24.071 ]' 00:16:24.071 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.071 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.071 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.071 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:24.071 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.071 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.071 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.071 00:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.329 00:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTRlYzQ3MGNmODNmMDk4NDk4MzgzMmZkOWI0NjE4ZWE1YjRiNjEzMTI5NjA3YmVhOTg2MDZjNGZmZTlhYjRjMcBoLk4=: 00:16:24.907 00:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.907 00:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.907 00:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:24.907 00:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:24.907 00:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:24.907 00:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.907 00:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.907 00:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:24.907 00:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:25.164 00:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:16:25.164 00:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.164 00:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:25.164 00:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:25.164 00:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:25.164 00:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.164 00:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.164 00:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:25.164 00:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.164 00:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:25.164 00:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.164 00:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.422 00:16:25.422 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.422 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.422 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.422 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.422 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.422 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:25.422 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.422 00:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:25.422 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.422 { 
00:16:25.422 "cntlid": 65, 00:16:25.422 "qid": 0, 00:16:25.422 "state": "enabled", 00:16:25.422 "thread": "nvmf_tgt_poll_group_000", 00:16:25.422 "listen_address": { 00:16:25.422 "trtype": "TCP", 00:16:25.422 "adrfam": "IPv4", 00:16:25.422 "traddr": "10.0.0.2", 00:16:25.422 "trsvcid": "4420" 00:16:25.422 }, 00:16:25.422 "peer_address": { 00:16:25.422 "trtype": "TCP", 00:16:25.422 "adrfam": "IPv4", 00:16:25.422 "traddr": "10.0.0.1", 00:16:25.422 "trsvcid": "47302" 00:16:25.422 }, 00:16:25.422 "auth": { 00:16:25.422 "state": "completed", 00:16:25.422 "digest": "sha384", 00:16:25.422 "dhgroup": "ffdhe3072" 00:16:25.422 } 00:16:25.422 } 00:16:25.422 ]' 00:16:25.422 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.678 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:25.678 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.678 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:25.678 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.678 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.678 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.678 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.935 00:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2IyYmQzYmZiMDU3YjVmYjMyNDkyMTA2ZTA4MDNjMmNjYTlmMWQzNDViZTU5NzFiAN8c7w==: --dhchap-ctrl-secret DHHC-1:03:OGFjZmI2ZGZjYTg5OGYyZTIxM2MyZDg1MjhlNWJlODA1ZGEwMjJhMzgwYTBhOGFjMDcxMWI1ZTI2YzQwYjlkYf2tkOw=: 00:16:26.499 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.499 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.499 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:26.499 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.499 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:26.499 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.499 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:26.499 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:26.499 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:16:26.499 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.499 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:16:26.499 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:26.499 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:26.499 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.499 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.499 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:26.499 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.499 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:26.499 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.499 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.756 00:16:26.756 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.756 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.756 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.013 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.013 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.013 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:27.013 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.013 00:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:27.013 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.013 { 00:16:27.013 "cntlid": 67, 00:16:27.013 "qid": 0, 00:16:27.013 "state": "enabled", 00:16:27.013 "thread": "nvmf_tgt_poll_group_000", 00:16:27.013 "listen_address": { 00:16:27.013 "trtype": "TCP", 00:16:27.013 "adrfam": "IPv4", 00:16:27.013 "traddr": "10.0.0.2", 00:16:27.013 "trsvcid": "4420" 00:16:27.013 }, 00:16:27.013 "peer_address": { 00:16:27.013 "trtype": "TCP", 00:16:27.014 "adrfam": "IPv4", 00:16:27.014 "traddr": "10.0.0.1", 00:16:27.014 "trsvcid": "47340" 00:16:27.014 }, 00:16:27.014 "auth": { 00:16:27.014 "state": "completed", 00:16:27.014 "digest": "sha384", 00:16:27.014 "dhgroup": "ffdhe3072" 00:16:27.014 } 00:16:27.014 } 00:16:27.014 ]' 00:16:27.014 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.014 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.014 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.271 00:17:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:27.271 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.271 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.271 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.271 00:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.271 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWU3ZjkwYTI2MGY1ZGMwYWY5MTFiZDAzZjkwZTVhZDGjnqBT: --dhchap-ctrl-secret DHHC-1:02:YmJmNzhmZGM5MmQ0YjA5N2RmZWNlNDk1OTE4ZjEzZjZjZTIzM2YxMmEwMzhiZDEzb47+Ng==: 00:16:27.835 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.835 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:27.835 00:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:27.835 00:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.835 00:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:27.835 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.835 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:27.835 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:28.092 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:16:28.092 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.092 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:28.092 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:28.092 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:28.092 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.092 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.092 00:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:28.092 00:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.092 00:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:28.092 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.092 00:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.349 00:16:28.349 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.349 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.349 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.605 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.605 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.605 00:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:28.605 00:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.605 00:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:28.605 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.605 { 00:16:28.605 "cntlid": 69, 00:16:28.605 "qid": 0, 00:16:28.605 "state": "enabled", 00:16:28.605 "thread": "nvmf_tgt_poll_group_000", 00:16:28.605 "listen_address": { 00:16:28.605 "trtype": "TCP", 00:16:28.605 "adrfam": "IPv4", 00:16:28.605 "traddr": "10.0.0.2", 00:16:28.605 "trsvcid": "4420" 00:16:28.605 }, 00:16:28.605 "peer_address": { 00:16:28.605 "trtype": "TCP", 00:16:28.605 "adrfam": "IPv4", 00:16:28.605 "traddr": "10.0.0.1", 00:16:28.605 "trsvcid": "59184" 00:16:28.605 }, 00:16:28.605 "auth": { 00:16:28.605 "state": "completed", 00:16:28.605 "digest": "sha384", 00:16:28.605 "dhgroup": "ffdhe3072" 00:16:28.605 } 00:16:28.605 } 00:16:28.605 ]' 00:16:28.605 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.605 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.605 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.605 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:28.605 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.605 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.605 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.606 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.887 00:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDk3MmY4MGNhMTQzNzNhOWJmZDRmMGU2Y2QxMjY2ZjEzOTcxYTM4ZDgzOGY5NzQ3C29SFA==: --dhchap-ctrl-secret 
DHHC-1:01:ZDRiN2I5OTFlOTRjNDlmMmM2YzEwNzcwMTU0MjE3OGQEbxAq: 00:16:29.454 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.454 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.454 00:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:29.454 00:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.454 00:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:29.454 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.454 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:29.454 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:29.762 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:16:29.762 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.762 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:29.762 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:29.762 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:29.762 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.762 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:29.762 00:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:29.762 00:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.762 00:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:29.762 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:29.762 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:29.762 00:16:29.762 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.762 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.762 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.020 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.020 00:17:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.020 00:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:30.020 00:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.020 00:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:30.020 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.020 { 00:16:30.020 "cntlid": 71, 00:16:30.020 "qid": 0, 00:16:30.020 "state": "enabled", 00:16:30.020 "thread": "nvmf_tgt_poll_group_000", 00:16:30.020 "listen_address": { 00:16:30.020 "trtype": "TCP", 00:16:30.020 "adrfam": "IPv4", 00:16:30.020 "traddr": "10.0.0.2", 00:16:30.020 "trsvcid": "4420" 00:16:30.020 }, 00:16:30.020 "peer_address": { 00:16:30.020 "trtype": "TCP", 00:16:30.020 "adrfam": "IPv4", 00:16:30.020 "traddr": "10.0.0.1", 00:16:30.020 "trsvcid": "59204" 00:16:30.020 }, 00:16:30.020 "auth": { 00:16:30.020 "state": "completed", 00:16:30.020 "digest": "sha384", 00:16:30.020 "dhgroup": "ffdhe3072" 00:16:30.020 } 00:16:30.020 } 00:16:30.020 ]' 00:16:30.020 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.020 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:30.020 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.020 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:30.020 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.279 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.279 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.279 00:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.279 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTRlYzQ3MGNmODNmMDk4NDk4MzgzMmZkOWI0NjE4ZWE1YjRiNjEzMTI5NjA3YmVhOTg2MDZjNGZmZTlhYjRjMcBoLk4=: 00:16:30.847 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.847 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.847 00:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:30.847 00:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.847 00:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:30.847 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.847 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.847 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:30.847 00:17:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:31.107 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:16:31.107 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.107 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:31.107 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:31.107 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:31.107 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.107 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.107 00:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:31.107 00:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.107 00:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:31.107 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.107 00:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.365 00:16:31.365 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.365 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.365 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.622 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.622 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.622 00:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:31.622 00:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.622 00:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:31.622 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.622 { 00:16:31.622 "cntlid": 73, 00:16:31.622 "qid": 0, 00:16:31.622 "state": "enabled", 00:16:31.622 "thread": "nvmf_tgt_poll_group_000", 00:16:31.622 "listen_address": { 00:16:31.622 "trtype": "TCP", 00:16:31.622 "adrfam": "IPv4", 00:16:31.622 "traddr": "10.0.0.2", 00:16:31.622 "trsvcid": "4420" 00:16:31.622 }, 00:16:31.622 "peer_address": { 00:16:31.622 "trtype": "TCP", 00:16:31.622 "adrfam": "IPv4", 00:16:31.622 "traddr": "10.0.0.1", 00:16:31.622 "trsvcid": "59234" 00:16:31.622 }, 00:16:31.622 "auth": { 00:16:31.622 
"state": "completed", 00:16:31.622 "digest": "sha384", 00:16:31.622 "dhgroup": "ffdhe4096" 00:16:31.622 } 00:16:31.622 } 00:16:31.622 ]' 00:16:31.622 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.622 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.622 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.622 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:31.622 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.622 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.622 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.622 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.880 00:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2IyYmQzYmZiMDU3YjVmYjMyNDkyMTA2ZTA4MDNjMmNjYTlmMWQzNDViZTU5NzFiAN8c7w==: --dhchap-ctrl-secret DHHC-1:03:OGFjZmI2ZGZjYTg5OGYyZTIxM2MyZDg1MjhlNWJlODA1ZGEwMjJhMzgwYTBhOGFjMDcxMWI1ZTI2YzQwYjlkYf2tkOw=: 00:16:32.446 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.446 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.446 00:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:32.446 00:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.446 00:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:32.446 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.446 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:32.446 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:32.705 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:16:32.705 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.705 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:32.705 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:32.705 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:32.705 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.705 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.705 00:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:32.705 00:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.705 00:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:32.705 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.705 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.962 00:16:32.962 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.962 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.962 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.962 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.962 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.962 00:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:32.962 00:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.962 00:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:32.962 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.962 { 00:16:32.962 "cntlid": 75, 00:16:32.962 "qid": 0, 00:16:32.962 "state": "enabled", 00:16:32.962 "thread": "nvmf_tgt_poll_group_000", 00:16:32.962 "listen_address": { 00:16:32.962 "trtype": "TCP", 00:16:32.962 "adrfam": "IPv4", 00:16:32.962 "traddr": "10.0.0.2", 00:16:32.962 "trsvcid": "4420" 00:16:32.962 }, 00:16:32.962 "peer_address": { 00:16:32.962 "trtype": "TCP", 00:16:32.962 "adrfam": "IPv4", 00:16:32.962 "traddr": "10.0.0.1", 00:16:32.962 "trsvcid": "59262" 00:16:32.962 }, 00:16:32.962 "auth": { 00:16:32.962 "state": "completed", 00:16:32.962 "digest": "sha384", 00:16:32.962 "dhgroup": "ffdhe4096" 00:16:32.962 } 00:16:32.962 } 00:16:32.962 ]' 00:16:32.962 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.220 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.220 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.220 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:33.220 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.220 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.220 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.220 00:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.478 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWU3ZjkwYTI2MGY1ZGMwYWY5MTFiZDAzZjkwZTVhZDGjnqBT: --dhchap-ctrl-secret DHHC-1:02:YmJmNzhmZGM5MmQ0YjA5N2RmZWNlNDk1OTE4ZjEzZjZjZTIzM2YxMmEwMzhiZDEzb47+Ng==: 00:16:34.046 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.046 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:34.046 00:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:34.046 00:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.046 00:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:34.046 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.046 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:34.046 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:34.046 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:16:34.046 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.046 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:34.046 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:34.046 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:34.046 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.046 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.046 00:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:34.046 00:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.046 00:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:34.046 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.046 00:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:16:34.304 00:16:34.304 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.304 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.304 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.563 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.563 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.563 00:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:34.563 00:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.563 00:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:34.563 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.563 { 00:16:34.563 "cntlid": 77, 00:16:34.563 "qid": 0, 00:16:34.563 "state": "enabled", 00:16:34.563 "thread": "nvmf_tgt_poll_group_000", 00:16:34.563 "listen_address": { 00:16:34.563 "trtype": "TCP", 00:16:34.563 "adrfam": "IPv4", 00:16:34.563 "traddr": "10.0.0.2", 00:16:34.563 "trsvcid": "4420" 00:16:34.563 }, 00:16:34.563 "peer_address": { 00:16:34.563 "trtype": "TCP", 00:16:34.563 "adrfam": "IPv4", 00:16:34.563 "traddr": "10.0.0.1", 00:16:34.563 "trsvcid": "59306" 00:16:34.563 }, 00:16:34.563 "auth": { 00:16:34.563 "state": "completed", 00:16:34.563 "digest": "sha384", 00:16:34.563 "dhgroup": "ffdhe4096" 00:16:34.563 } 00:16:34.563 } 00:16:34.563 ]' 00:16:34.563 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.563 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.563 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.563 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:34.563 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.563 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.563 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.563 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.822 00:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDk3MmY4MGNhMTQzNzNhOWJmZDRmMGU2Y2QxMjY2ZjEzOTcxYTM4ZDgzOGY5NzQ3C29SFA==: --dhchap-ctrl-secret DHHC-1:01:ZDRiN2I5OTFlOTRjNDlmMmM2YzEwNzcwMTU0MjE3OGQEbxAq: 00:16:35.390 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.390 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.390 00:17:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:16:35.390 00:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.390 00:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:35.390 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.390 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:35.390 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:35.649 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:16:35.649 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.649 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:35.649 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:35.649 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:35.649 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.649 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:35.649 00:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:35.649 00:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.649 00:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:35.649 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:35.649 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:35.908 00:16:35.908 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.908 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.908 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.167 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.167 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.167 00:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:36.167 00:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.167 00:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:36.167 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.167 { 00:16:36.167 "cntlid": 79, 00:16:36.167 "qid": 
0, 00:16:36.167 "state": "enabled", 00:16:36.167 "thread": "nvmf_tgt_poll_group_000", 00:16:36.167 "listen_address": { 00:16:36.167 "trtype": "TCP", 00:16:36.167 "adrfam": "IPv4", 00:16:36.167 "traddr": "10.0.0.2", 00:16:36.167 "trsvcid": "4420" 00:16:36.167 }, 00:16:36.167 "peer_address": { 00:16:36.167 "trtype": "TCP", 00:16:36.167 "adrfam": "IPv4", 00:16:36.167 "traddr": "10.0.0.1", 00:16:36.167 "trsvcid": "59318" 00:16:36.167 }, 00:16:36.167 "auth": { 00:16:36.167 "state": "completed", 00:16:36.167 "digest": "sha384", 00:16:36.167 "dhgroup": "ffdhe4096" 00:16:36.167 } 00:16:36.167 } 00:16:36.167 ]' 00:16:36.167 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.167 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.167 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.167 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:36.167 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.167 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.167 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.167 00:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.426 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTRlYzQ3MGNmODNmMDk4NDk4MzgzMmZkOWI0NjE4ZWE1YjRiNjEzMTI5NjA3YmVhOTg2MDZjNGZmZTlhYjRjMcBoLk4=: 00:16:36.994 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.995 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.995 00:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:36.995 00:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.995 00:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:36.995 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.995 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.995 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:36.995 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:37.254 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:16:37.254 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.254 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:37.254 00:17:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:37.254 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:37.254 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.254 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.254 00:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:37.254 00:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.254 00:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:37.254 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.254 00:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.513 00:16:37.513 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.513 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.513 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.772 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.773 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.773 00:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:37.773 00:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.773 00:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:37.773 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.773 { 00:16:37.773 "cntlid": 81, 00:16:37.773 "qid": 0, 00:16:37.773 "state": "enabled", 00:16:37.773 "thread": "nvmf_tgt_poll_group_000", 00:16:37.773 "listen_address": { 00:16:37.773 "trtype": "TCP", 00:16:37.773 "adrfam": "IPv4", 00:16:37.773 "traddr": "10.0.0.2", 00:16:37.773 "trsvcid": "4420" 00:16:37.773 }, 00:16:37.773 "peer_address": { 00:16:37.773 "trtype": "TCP", 00:16:37.773 "adrfam": "IPv4", 00:16:37.773 "traddr": "10.0.0.1", 00:16:37.773 "trsvcid": "59330" 00:16:37.773 }, 00:16:37.773 "auth": { 00:16:37.773 "state": "completed", 00:16:37.773 "digest": "sha384", 00:16:37.773 "dhgroup": "ffdhe6144" 00:16:37.773 } 00:16:37.773 } 00:16:37.773 ]' 00:16:37.773 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.773 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.773 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.773 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:37.773 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.773 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.773 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.773 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.032 00:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2IyYmQzYmZiMDU3YjVmYjMyNDkyMTA2ZTA4MDNjMmNjYTlmMWQzNDViZTU5NzFiAN8c7w==: --dhchap-ctrl-secret DHHC-1:03:OGFjZmI2ZGZjYTg5OGYyZTIxM2MyZDg1MjhlNWJlODA1ZGEwMjJhMzgwYTBhOGFjMDcxMWI1ZTI2YzQwYjlkYf2tkOw=: 00:16:38.600 00:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.600 00:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.600 00:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:38.600 00:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.600 00:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:38.600 00:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.600 00:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:38.600 00:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:38.859 00:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:16:38.859 00:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.859 00:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:38.859 00:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:38.859 00:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:38.859 00:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.859 00:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.859 00:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:38.859 00:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.859 00:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:38.859 00:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.859 00:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.116 00:16:39.116 00:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.116 00:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.116 00:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.374 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.374 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.374 00:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:39.374 00:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.374 00:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:39.374 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.375 { 00:16:39.375 "cntlid": 83, 00:16:39.375 "qid": 0, 00:16:39.375 "state": "enabled", 00:16:39.375 "thread": "nvmf_tgt_poll_group_000", 00:16:39.375 "listen_address": { 00:16:39.375 "trtype": "TCP", 00:16:39.375 "adrfam": "IPv4", 00:16:39.375 "traddr": "10.0.0.2", 00:16:39.375 "trsvcid": "4420" 00:16:39.375 }, 00:16:39.375 "peer_address": { 00:16:39.375 "trtype": "TCP", 00:16:39.375 "adrfam": "IPv4", 00:16:39.375 "traddr": "10.0.0.1", 00:16:39.375 "trsvcid": "44024" 00:16:39.375 }, 00:16:39.375 "auth": { 00:16:39.375 "state": "completed", 00:16:39.375 "digest": "sha384", 00:16:39.375 "dhgroup": "ffdhe6144" 00:16:39.375 } 00:16:39.375 } 00:16:39.375 ]' 00:16:39.375 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.375 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:39.375 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.375 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:39.375 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.375 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.375 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.375 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.633 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWU3ZjkwYTI2MGY1ZGMwYWY5MTFiZDAzZjkwZTVhZDGjnqBT: --dhchap-ctrl-secret 
DHHC-1:02:YmJmNzhmZGM5MmQ0YjA5N2RmZWNlNDk1OTE4ZjEzZjZjZTIzM2YxMmEwMzhiZDEzb47+Ng==: 00:16:40.200 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.200 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.200 00:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:40.200 00:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.200 00:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:40.200 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.200 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:40.200 00:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:40.458 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:16:40.459 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.459 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:40.459 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:40.459 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:40.459 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.459 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.459 00:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:40.459 00:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.459 00:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:40.459 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.459 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.717 00:16:40.717 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.717 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.717 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.976 00:17:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.976 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.976 00:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:40.976 00:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.976 00:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:40.976 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.976 { 00:16:40.976 "cntlid": 85, 00:16:40.976 "qid": 0, 00:16:40.976 "state": "enabled", 00:16:40.976 "thread": "nvmf_tgt_poll_group_000", 00:16:40.976 "listen_address": { 00:16:40.976 "trtype": "TCP", 00:16:40.976 "adrfam": "IPv4", 00:16:40.976 "traddr": "10.0.0.2", 00:16:40.976 "trsvcid": "4420" 00:16:40.976 }, 00:16:40.976 "peer_address": { 00:16:40.976 "trtype": "TCP", 00:16:40.976 "adrfam": "IPv4", 00:16:40.976 "traddr": "10.0.0.1", 00:16:40.976 "trsvcid": "44050" 00:16:40.976 }, 00:16:40.976 "auth": { 00:16:40.976 "state": "completed", 00:16:40.976 "digest": "sha384", 00:16:40.976 "dhgroup": "ffdhe6144" 00:16:40.976 } 00:16:40.976 } 00:16:40.976 ]' 00:16:40.976 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.976 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.976 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.976 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:40.976 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.976 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.976 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.976 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.235 00:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDk3MmY4MGNhMTQzNzNhOWJmZDRmMGU2Y2QxMjY2ZjEzOTcxYTM4ZDgzOGY5NzQ3C29SFA==: --dhchap-ctrl-secret DHHC-1:01:ZDRiN2I5OTFlOTRjNDlmMmM2YzEwNzcwMTU0MjE3OGQEbxAq: 00:16:41.803 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.803 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.803 00:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:41.803 00:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.803 00:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:41.803 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.803 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
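The iterations above all drive one fixed RPC sequence per (digest, dhgroup, keyid) combination. Below is a minimal bash sketch of that flow, distilled from the rpc.py calls visible in this log; the rpc/hostrpc shorthands are illustrative stand-ins for the script's hostrpc helper, and key3 is shown without a controller key because this run defines no ckey3 (auth.sh appends --dhchap-ctrlr-key only when ckeys[keyid] is set):

    # Shorthands assumed for this sketch; auth.sh defines an equivalent hostrpc().
    rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }
    hostrpc() { rpc -s /var/tmp/host.sock "$@"; }
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Pin the host-side initiator to the digest/dhgroup under test.
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

    # Register the host on the target with the key under test (target-side socket).
    rpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

    # Attach a controller from the host SPDK instance, forcing the DH-HMAC-CHAP handshake.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3
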
00:16:41.803 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:41.803 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:16:41.803 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.803 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:41.803 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:41.803 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:41.803 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.803 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:41.803 00:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:41.803 00:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.803 00:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:41.803 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:41.803 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:42.371 00:16:42.371 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.371 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.371 00:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.371 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.371 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.371 00:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:42.371 00:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.371 00:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:42.371 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.371 { 00:16:42.371 "cntlid": 87, 00:16:42.371 "qid": 0, 00:16:42.371 "state": "enabled", 00:16:42.371 "thread": "nvmf_tgt_poll_group_000", 00:16:42.371 "listen_address": { 00:16:42.371 "trtype": "TCP", 00:16:42.371 "adrfam": "IPv4", 00:16:42.371 "traddr": "10.0.0.2", 00:16:42.371 "trsvcid": "4420" 00:16:42.371 }, 00:16:42.371 "peer_address": { 00:16:42.371 "trtype": "TCP", 00:16:42.371 "adrfam": "IPv4", 00:16:42.371 "traddr": "10.0.0.1", 00:16:42.371 "trsvcid": "44076" 00:16:42.371 }, 00:16:42.371 "auth": { 00:16:42.371 "state": "completed", 
00:16:42.371 "digest": "sha384", 00:16:42.371 "dhgroup": "ffdhe6144" 00:16:42.371 } 00:16:42.371 } 00:16:42.371 ]' 00:16:42.371 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.371 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.371 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.630 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:42.630 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.630 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.630 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.630 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.630 00:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTRlYzQ3MGNmODNmMDk4NDk4MzgzMmZkOWI0NjE4ZWE1YjRiNjEzMTI5NjA3YmVhOTg2MDZjNGZmZTlhYjRjMcBoLk4=: 00:16:43.198 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.198 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.198 00:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:43.198 00:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.198 00:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:43.198 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.198 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.198 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:43.198 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:43.457 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:16:43.457 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.457 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:43.457 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:43.457 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:43.457 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.457 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:43.457 00:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:43.457 00:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.457 00:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:43.457 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.457 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.025 00:16:44.025 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.025 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.025 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.025 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.025 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.025 00:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:44.025 00:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.025 00:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:44.025 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.025 { 00:16:44.025 "cntlid": 89, 00:16:44.025 "qid": 0, 00:16:44.025 "state": "enabled", 00:16:44.025 "thread": "nvmf_tgt_poll_group_000", 00:16:44.025 "listen_address": { 00:16:44.025 "trtype": "TCP", 00:16:44.025 "adrfam": "IPv4", 00:16:44.025 "traddr": "10.0.0.2", 00:16:44.025 "trsvcid": "4420" 00:16:44.025 }, 00:16:44.025 "peer_address": { 00:16:44.025 "trtype": "TCP", 00:16:44.025 "adrfam": "IPv4", 00:16:44.025 "traddr": "10.0.0.1", 00:16:44.025 "trsvcid": "44098" 00:16:44.025 }, 00:16:44.025 "auth": { 00:16:44.025 "state": "completed", 00:16:44.025 "digest": "sha384", 00:16:44.025 "dhgroup": "ffdhe8192" 00:16:44.025 } 00:16:44.025 } 00:16:44.025 ]' 00:16:44.025 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.284 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.284 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.284 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:44.284 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.284 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.284 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.284 00:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.553 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2IyYmQzYmZiMDU3YjVmYjMyNDkyMTA2ZTA4MDNjMmNjYTlmMWQzNDViZTU5NzFiAN8c7w==: --dhchap-ctrl-secret DHHC-1:03:OGFjZmI2ZGZjYTg5OGYyZTIxM2MyZDg1MjhlNWJlODA1ZGEwMjJhMzgwYTBhOGFjMDcxMWI1ZTI2YzQwYjlkYf2tkOw=: 00:16:45.122 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.122 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.122 00:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:45.122 00:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.122 00:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:45.122 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.122 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:45.122 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:45.122 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:16:45.122 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.122 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:45.122 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:45.122 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:45.122 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.122 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.122 00:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:45.122 00:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.122 00:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:45.122 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.122 00:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
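[Editor's note] Each connect_authenticate round that repeats through this trace reduces to the sequence below. This is a sketch assembled from the commands already visible in the log; the RPC, HOSTNQN, and SUBNQN shell variables are shorthand introduced here for readability, not names used by target/auth.sh.

    # One DH-HMAC-CHAP round, as driven by target/auth.sh (sketch).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # 1. Pin the host-side initiator to one digest/dhgroup combination.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

    # 2. Register the host on the target (default RPC socket) with the key
    #    pair under test; the ckeyN argument enables bidirectional auth.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # 3. Attach a controller through the host app, which forces the
    #    authentication handshake on the new TCP queue pair.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # 4. Verify via nvmf_subsystem_get_qpairs + jq, detach nvme0, then run
    #    the same round with the kernel initiator:
    #    nvme connect ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...

After the kernel-initiator pass, nvme disconnect and nvmf_subsystem_remove_host undo the round before the next key/digest/dhgroup combination is tried.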
00:16:45.690 00:16:45.690 00:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.690 00:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.690 00:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.949 00:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.949 00:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.949 00:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:45.949 00:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.949 00:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:45.949 00:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.949 { 00:16:45.949 "cntlid": 91, 00:16:45.949 "qid": 0, 00:16:45.949 "state": "enabled", 00:16:45.949 "thread": "nvmf_tgt_poll_group_000", 00:16:45.949 "listen_address": { 00:16:45.949 "trtype": "TCP", 00:16:45.949 "adrfam": "IPv4", 00:16:45.949 "traddr": "10.0.0.2", 00:16:45.949 "trsvcid": "4420" 00:16:45.949 }, 00:16:45.949 "peer_address": { 00:16:45.949 "trtype": "TCP", 00:16:45.949 "adrfam": "IPv4", 00:16:45.949 "traddr": "10.0.0.1", 00:16:45.949 "trsvcid": "44126" 00:16:45.949 }, 00:16:45.949 "auth": { 00:16:45.949 "state": "completed", 00:16:45.949 "digest": "sha384", 00:16:45.949 "dhgroup": "ffdhe8192" 00:16:45.949 } 00:16:45.949 } 00:16:45.949 ]' 00:16:45.949 00:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.949 00:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.949 00:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.949 00:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:45.949 00:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.949 00:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.949 00:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.949 00:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.219 00:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWU3ZjkwYTI2MGY1ZGMwYWY5MTFiZDAzZjkwZTVhZDGjnqBT: --dhchap-ctrl-secret DHHC-1:02:YmJmNzhmZGM5MmQ0YjA5N2RmZWNlNDk1OTE4ZjEzZjZjZTIzM2YxMmEwMzhiZDEzb47+Ng==: 00:16:46.843 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.843 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.843 00:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:16:46.843 00:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.843 00:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:46.843 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.843 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:46.843 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:46.843 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:16:46.843 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.843 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:46.843 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:46.843 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:46.843 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.843 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.843 00:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:46.843 00:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.843 00:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:46.843 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.843 00:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.429 00:16:47.429 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.429 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.429 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.688 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.688 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.688 00:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:47.688 00:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.688 00:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:47.688 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.688 { 
00:16:47.688 "cntlid": 93, 00:16:47.688 "qid": 0, 00:16:47.688 "state": "enabled", 00:16:47.688 "thread": "nvmf_tgt_poll_group_000", 00:16:47.688 "listen_address": { 00:16:47.688 "trtype": "TCP", 00:16:47.688 "adrfam": "IPv4", 00:16:47.688 "traddr": "10.0.0.2", 00:16:47.688 "trsvcid": "4420" 00:16:47.688 }, 00:16:47.688 "peer_address": { 00:16:47.688 "trtype": "TCP", 00:16:47.688 "adrfam": "IPv4", 00:16:47.688 "traddr": "10.0.0.1", 00:16:47.688 "trsvcid": "44148" 00:16:47.688 }, 00:16:47.688 "auth": { 00:16:47.688 "state": "completed", 00:16:47.688 "digest": "sha384", 00:16:47.688 "dhgroup": "ffdhe8192" 00:16:47.688 } 00:16:47.688 } 00:16:47.688 ]' 00:16:47.688 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.688 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.688 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.688 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.688 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.688 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.688 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.688 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.948 00:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDk3MmY4MGNhMTQzNzNhOWJmZDRmMGU2Y2QxMjY2ZjEzOTcxYTM4ZDgzOGY5NzQ3C29SFA==: --dhchap-ctrl-secret DHHC-1:01:ZDRiN2I5OTFlOTRjNDlmMmM2YzEwNzcwMTU0MjE3OGQEbxAq: 00:16:48.517 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.517 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.517 00:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:48.517 00:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.517 00:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:48.517 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.517 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:48.517 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:48.517 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:48.517 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.517 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:48.517 00:18:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:48.517 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:48.517 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.517 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:48.517 00:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:48.517 00:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.777 00:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:48.777 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:48.777 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.036 00:16:49.036 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.036 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.036 00:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.295 00:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.295 00:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.295 00:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:49.295 00:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.296 00:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:49.296 00:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.296 { 00:16:49.296 "cntlid": 95, 00:16:49.296 "qid": 0, 00:16:49.296 "state": "enabled", 00:16:49.296 "thread": "nvmf_tgt_poll_group_000", 00:16:49.296 "listen_address": { 00:16:49.296 "trtype": "TCP", 00:16:49.296 "adrfam": "IPv4", 00:16:49.296 "traddr": "10.0.0.2", 00:16:49.296 "trsvcid": "4420" 00:16:49.296 }, 00:16:49.296 "peer_address": { 00:16:49.296 "trtype": "TCP", 00:16:49.296 "adrfam": "IPv4", 00:16:49.296 "traddr": "10.0.0.1", 00:16:49.296 "trsvcid": "55610" 00:16:49.296 }, 00:16:49.296 "auth": { 00:16:49.296 "state": "completed", 00:16:49.296 "digest": "sha384", 00:16:49.296 "dhgroup": "ffdhe8192" 00:16:49.296 } 00:16:49.296 } 00:16:49.296 ]' 00:16:49.296 00:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.296 00:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.296 00:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.296 00:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.296 00:18:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.555 00:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.555 00:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.555 00:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.555 00:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTRlYzQ3MGNmODNmMDk4NDk4MzgzMmZkOWI0NjE4ZWE1YjRiNjEzMTI5NjA3YmVhOTg2MDZjNGZmZTlhYjRjMcBoLk4=: 00:16:50.124 00:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.124 00:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.124 00:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:50.124 00:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.124 00:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:50.124 00:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:50.124 00:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.124 00:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.124 00:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:50.124 00:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:50.384 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:50.384 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.384 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:50.384 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:50.384 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:50.384 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.384 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.384 00:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:50.384 00:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.384 00:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:50.384 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.384 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.642 00:16:50.642 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.642 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.642 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.902 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.902 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.902 00:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:50.902 00:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.902 00:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:50.902 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.902 { 00:16:50.902 "cntlid": 97, 00:16:50.902 "qid": 0, 00:16:50.902 "state": "enabled", 00:16:50.902 "thread": "nvmf_tgt_poll_group_000", 00:16:50.902 "listen_address": { 00:16:50.902 "trtype": "TCP", 00:16:50.902 "adrfam": "IPv4", 00:16:50.902 "traddr": "10.0.0.2", 00:16:50.902 "trsvcid": "4420" 00:16:50.902 }, 00:16:50.902 "peer_address": { 00:16:50.902 "trtype": "TCP", 00:16:50.902 "adrfam": "IPv4", 00:16:50.902 "traddr": "10.0.0.1", 00:16:50.902 "trsvcid": "55638" 00:16:50.902 }, 00:16:50.902 "auth": { 00:16:50.902 "state": "completed", 00:16:50.902 "digest": "sha512", 00:16:50.902 "dhgroup": "null" 00:16:50.902 } 00:16:50.902 } 00:16:50.902 ]' 00:16:50.902 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.902 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.902 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.902 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:50.902 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.902 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.902 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.902 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.162 00:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2IyYmQzYmZiMDU3YjVmYjMyNDkyMTA2ZTA4MDNjMmNjYTlmMWQzNDViZTU5NzFiAN8c7w==: --dhchap-ctrl-secret 
DHHC-1:03:OGFjZmI2ZGZjYTg5OGYyZTIxM2MyZDg1MjhlNWJlODA1ZGEwMjJhMzgwYTBhOGFjMDcxMWI1ZTI2YzQwYjlkYf2tkOw=: 00:16:51.730 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.730 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.730 00:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:51.730 00:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.730 00:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:51.730 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.730 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:51.730 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:51.989 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:16:51.989 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.989 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:51.989 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:51.989 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:51.989 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.989 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.989 00:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:51.989 00:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.989 00:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:51.989 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.989 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.248 00:16:52.248 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.248 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.248 00:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.248 00:18:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.248 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.248 00:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:52.248 00:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.248 00:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:52.248 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.248 { 00:16:52.248 "cntlid": 99, 00:16:52.248 "qid": 0, 00:16:52.248 "state": "enabled", 00:16:52.248 "thread": "nvmf_tgt_poll_group_000", 00:16:52.248 "listen_address": { 00:16:52.248 "trtype": "TCP", 00:16:52.248 "adrfam": "IPv4", 00:16:52.248 "traddr": "10.0.0.2", 00:16:52.248 "trsvcid": "4420" 00:16:52.248 }, 00:16:52.248 "peer_address": { 00:16:52.248 "trtype": "TCP", 00:16:52.248 "adrfam": "IPv4", 00:16:52.248 "traddr": "10.0.0.1", 00:16:52.248 "trsvcid": "55664" 00:16:52.248 }, 00:16:52.248 "auth": { 00:16:52.248 "state": "completed", 00:16:52.248 "digest": "sha512", 00:16:52.248 "dhgroup": "null" 00:16:52.248 } 00:16:52.248 } 00:16:52.248 ]' 00:16:52.248 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.516 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.516 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.516 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:52.516 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.516 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.516 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.516 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.774 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWU3ZjkwYTI2MGY1ZGMwYWY5MTFiZDAzZjkwZTVhZDGjnqBT: --dhchap-ctrl-secret DHHC-1:02:YmJmNzhmZGM5MmQ0YjA5N2RmZWNlNDk1OTE4ZjEzZjZjZTIzM2YxMmEwMzhiZDEzb47+Ng==: 00:16:53.339 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.339 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.339 00:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:53.339 00:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.339 00:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:53.339 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.339 00:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:53.339 00:18:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:53.339 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:16:53.339 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.339 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:53.339 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:53.340 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:53.340 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.340 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.340 00:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:53.340 00:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.340 00:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:53.340 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.340 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.597 00:16:53.597 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.597 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.597 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.854 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.854 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.854 00:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:53.854 00:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.854 00:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:53.854 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.854 { 00:16:53.854 "cntlid": 101, 00:16:53.854 "qid": 0, 00:16:53.854 "state": "enabled", 00:16:53.854 "thread": "nvmf_tgt_poll_group_000", 00:16:53.854 "listen_address": { 00:16:53.854 "trtype": "TCP", 00:16:53.854 "adrfam": "IPv4", 00:16:53.854 "traddr": "10.0.0.2", 00:16:53.854 "trsvcid": "4420" 00:16:53.854 }, 00:16:53.854 "peer_address": { 00:16:53.854 "trtype": "TCP", 00:16:53.854 "adrfam": "IPv4", 00:16:53.854 "traddr": "10.0.0.1", 00:16:53.854 "trsvcid": "55694" 00:16:53.854 }, 00:16:53.854 "auth": 
{ 00:16:53.854 "state": "completed", 00:16:53.854 "digest": "sha512", 00:16:53.854 "dhgroup": "null" 00:16:53.854 } 00:16:53.854 } 00:16:53.854 ]' 00:16:53.855 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.855 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.855 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.855 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:53.855 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.855 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.855 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.855 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.111 00:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDk3MmY4MGNhMTQzNzNhOWJmZDRmMGU2Y2QxMjY2ZjEzOTcxYTM4ZDgzOGY5NzQ3C29SFA==: --dhchap-ctrl-secret DHHC-1:01:ZDRiN2I5OTFlOTRjNDlmMmM2YzEwNzcwMTU0MjE3OGQEbxAq: 00:16:54.676 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.676 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.676 00:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:54.676 00:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.676 00:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:54.676 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.676 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:54.676 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:54.933 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:16:54.933 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.933 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:54.933 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:54.933 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:54.933 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.933 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:54.933 00:18:13 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:16:54.933 00:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.933 00:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:54.933 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.933 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.190 00:16:55.190 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.190 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.190 00:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.190 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.190 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.190 00:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:55.190 00:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.190 00:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:55.447 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.447 { 00:16:55.447 "cntlid": 103, 00:16:55.447 "qid": 0, 00:16:55.447 "state": "enabled", 00:16:55.447 "thread": "nvmf_tgt_poll_group_000", 00:16:55.447 "listen_address": { 00:16:55.447 "trtype": "TCP", 00:16:55.447 "adrfam": "IPv4", 00:16:55.447 "traddr": "10.0.0.2", 00:16:55.447 "trsvcid": "4420" 00:16:55.447 }, 00:16:55.447 "peer_address": { 00:16:55.447 "trtype": "TCP", 00:16:55.447 "adrfam": "IPv4", 00:16:55.447 "traddr": "10.0.0.1", 00:16:55.447 "trsvcid": "55720" 00:16:55.447 }, 00:16:55.447 "auth": { 00:16:55.447 "state": "completed", 00:16:55.447 "digest": "sha512", 00:16:55.447 "dhgroup": "null" 00:16:55.447 } 00:16:55.447 } 00:16:55.447 ]' 00:16:55.447 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.447 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.447 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.447 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:55.447 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.447 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.447 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.447 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.703 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTRlYzQ3MGNmODNmMDk4NDk4MzgzMmZkOWI0NjE4ZWE1YjRiNjEzMTI5NjA3YmVhOTg2MDZjNGZmZTlhYjRjMcBoLk4=: 00:16:56.268 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.268 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.268 00:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:56.268 00:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.268 00:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:56.268 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.268 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.268 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:56.268 00:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:56.268 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:16:56.268 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.268 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:56.268 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:56.268 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:56.268 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.268 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.268 00:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:56.268 00:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.268 00:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:56.269 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.269 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.527 00:16:56.527 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.527 00:18:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.527 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.785 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.785 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.785 00:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:56.785 00:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.785 00:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:56.785 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.785 { 00:16:56.785 "cntlid": 105, 00:16:56.785 "qid": 0, 00:16:56.785 "state": "enabled", 00:16:56.785 "thread": "nvmf_tgt_poll_group_000", 00:16:56.785 "listen_address": { 00:16:56.785 "trtype": "TCP", 00:16:56.785 "adrfam": "IPv4", 00:16:56.785 "traddr": "10.0.0.2", 00:16:56.785 "trsvcid": "4420" 00:16:56.785 }, 00:16:56.785 "peer_address": { 00:16:56.785 "trtype": "TCP", 00:16:56.785 "adrfam": "IPv4", 00:16:56.785 "traddr": "10.0.0.1", 00:16:56.785 "trsvcid": "55752" 00:16:56.785 }, 00:16:56.785 "auth": { 00:16:56.785 "state": "completed", 00:16:56.785 "digest": "sha512", 00:16:56.785 "dhgroup": "ffdhe2048" 00:16:56.785 } 00:16:56.785 } 00:16:56.785 ]' 00:16:56.785 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.785 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.785 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.785 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:56.785 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.785 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.785 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.785 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.044 00:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2IyYmQzYmZiMDU3YjVmYjMyNDkyMTA2ZTA4MDNjMmNjYTlmMWQzNDViZTU5NzFiAN8c7w==: --dhchap-ctrl-secret DHHC-1:03:OGFjZmI2ZGZjYTg5OGYyZTIxM2MyZDg1MjhlNWJlODA1ZGEwMjJhMzgwYTBhOGFjMDcxMWI1ZTI2YzQwYjlkYf2tkOw=: 00:16:57.611 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.611 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.611 00:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:57.611 00:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
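[Editor's note] The three jq checks that follow every attach in this trace (the auth.sh@46-@48 lines) are the actual pass/fail assertions: the target reports the negotiated authentication parameters per queue pair, and the script compares them against what it configured. A minimal sketch of that verification, reusing the RPC shorthand from the earlier sketch (the qpairs variable name is illustrative):

    # Ask the target for the subsystem's queue pairs and check that the
    # negotiated digest, DH group, and auth state match what was set.
    qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

A state of "completed" indicates the DH-HMAC-CHAP transaction finished successfully on that queue pair; in the rounds here that supply a controller key, this covers both directions of the handshake.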
00:16:57.611 00:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:57.611 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.611 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:57.611 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:57.870 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:16:57.870 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.870 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:57.870 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:57.870 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:57.871 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.871 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.871 00:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:57.871 00:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.871 00:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:57.871 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.871 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.130 00:16:58.130 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.130 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.130 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.130 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.130 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.130 00:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:58.130 00:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.130 00:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:58.130 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.130 { 00:16:58.130 "cntlid": 107, 00:16:58.130 "qid": 0, 00:16:58.130 "state": "enabled", 00:16:58.130 "thread": 
"nvmf_tgt_poll_group_000", 00:16:58.130 "listen_address": { 00:16:58.130 "trtype": "TCP", 00:16:58.130 "adrfam": "IPv4", 00:16:58.130 "traddr": "10.0.0.2", 00:16:58.130 "trsvcid": "4420" 00:16:58.130 }, 00:16:58.130 "peer_address": { 00:16:58.130 "trtype": "TCP", 00:16:58.130 "adrfam": "IPv4", 00:16:58.130 "traddr": "10.0.0.1", 00:16:58.130 "trsvcid": "55768" 00:16:58.130 }, 00:16:58.130 "auth": { 00:16:58.130 "state": "completed", 00:16:58.130 "digest": "sha512", 00:16:58.130 "dhgroup": "ffdhe2048" 00:16:58.130 } 00:16:58.130 } 00:16:58.130 ]' 00:16:58.389 00:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.389 00:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.389 00:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.389 00:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:58.389 00:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.389 00:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.389 00:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.389 00:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.649 00:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWU3ZjkwYTI2MGY1ZGMwYWY5MTFiZDAzZjkwZTVhZDGjnqBT: --dhchap-ctrl-secret DHHC-1:02:YmJmNzhmZGM5MmQ0YjA5N2RmZWNlNDk1OTE4ZjEzZjZjZTIzM2YxMmEwMzhiZDEzb47+Ng==: 00:16:59.218 00:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.218 00:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.218 00:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:59.218 00:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.218 00:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:59.218 00:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.218 00:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:59.218 00:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:59.218 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:59.218 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.218 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:59.218 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:59.218 00:18:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:59.218 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.218 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.218 00:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:59.218 00:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.218 00:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:59.218 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.218 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.476 00:16:59.476 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.476 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.476 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.734 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.734 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.734 00:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:59.734 00:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.734 00:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:59.734 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.734 { 00:16:59.734 "cntlid": 109, 00:16:59.734 "qid": 0, 00:16:59.734 "state": "enabled", 00:16:59.734 "thread": "nvmf_tgt_poll_group_000", 00:16:59.734 "listen_address": { 00:16:59.734 "trtype": "TCP", 00:16:59.734 "adrfam": "IPv4", 00:16:59.734 "traddr": "10.0.0.2", 00:16:59.734 "trsvcid": "4420" 00:16:59.734 }, 00:16:59.734 "peer_address": { 00:16:59.734 "trtype": "TCP", 00:16:59.734 "adrfam": "IPv4", 00:16:59.734 "traddr": "10.0.0.1", 00:16:59.734 "trsvcid": "47098" 00:16:59.734 }, 00:16:59.734 "auth": { 00:16:59.734 "state": "completed", 00:16:59.734 "digest": "sha512", 00:16:59.735 "dhgroup": "ffdhe2048" 00:16:59.735 } 00:16:59.735 } 00:16:59.735 ]' 00:16:59.735 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.735 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.735 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.735 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:59.735 00:18:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.735 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.735 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.735 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.993 00:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDk3MmY4MGNhMTQzNzNhOWJmZDRmMGU2Y2QxMjY2ZjEzOTcxYTM4ZDgzOGY5NzQ3C29SFA==: --dhchap-ctrl-secret DHHC-1:01:ZDRiN2I5OTFlOTRjNDlmMmM2YzEwNzcwMTU0MjE3OGQEbxAq: 00:17:00.561 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.561 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.561 00:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:00.561 00:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.561 00:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:00.561 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.561 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:00.561 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:00.821 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:00.821 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.821 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:00.821 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:00.821 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:00.821 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.821 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:00.821 00:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:00.821 00:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.821 00:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:00.821 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.821 00:18:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:01.080 00:17:01.080 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.080 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.080 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.339 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.339 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.339 00:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:01.339 00:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.339 00:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:01.339 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.339 { 00:17:01.339 "cntlid": 111, 00:17:01.339 "qid": 0, 00:17:01.339 "state": "enabled", 00:17:01.339 "thread": "nvmf_tgt_poll_group_000", 00:17:01.339 "listen_address": { 00:17:01.339 "trtype": "TCP", 00:17:01.339 "adrfam": "IPv4", 00:17:01.339 "traddr": "10.0.0.2", 00:17:01.339 "trsvcid": "4420" 00:17:01.339 }, 00:17:01.339 "peer_address": { 00:17:01.339 "trtype": "TCP", 00:17:01.339 "adrfam": "IPv4", 00:17:01.339 "traddr": "10.0.0.1", 00:17:01.339 "trsvcid": "47130" 00:17:01.339 }, 00:17:01.339 "auth": { 00:17:01.339 "state": "completed", 00:17:01.339 "digest": "sha512", 00:17:01.339 "dhgroup": "ffdhe2048" 00:17:01.339 } 00:17:01.339 } 00:17:01.339 ]' 00:17:01.339 00:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.339 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.339 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.339 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:01.339 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.339 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.339 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.339 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.598 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTRlYzQ3MGNmODNmMDk4NDk4MzgzMmZkOWI0NjE4ZWE1YjRiNjEzMTI5NjA3YmVhOTg2MDZjNGZmZTlhYjRjMcBoLk4=: 00:17:02.166 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.166 00:18:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.166 00:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:02.166 00:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.166 00:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:02.166 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.166 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.166 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.166 00:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.426 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:02.426 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.426 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:02.426 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:02.426 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:02.426 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.426 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.426 00:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:02.426 00:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.426 00:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:02.426 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.426 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.685 00:17:02.685 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.685 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.685 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.685 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.685 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.685 00:18:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:17:02.685 00:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.685 00:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:02.685 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.685 { 00:17:02.685 "cntlid": 113, 00:17:02.685 "qid": 0, 00:17:02.685 "state": "enabled", 00:17:02.685 "thread": "nvmf_tgt_poll_group_000", 00:17:02.685 "listen_address": { 00:17:02.685 "trtype": "TCP", 00:17:02.685 "adrfam": "IPv4", 00:17:02.685 "traddr": "10.0.0.2", 00:17:02.685 "trsvcid": "4420" 00:17:02.685 }, 00:17:02.685 "peer_address": { 00:17:02.685 "trtype": "TCP", 00:17:02.685 "adrfam": "IPv4", 00:17:02.685 "traddr": "10.0.0.1", 00:17:02.685 "trsvcid": "47170" 00:17:02.685 }, 00:17:02.685 "auth": { 00:17:02.685 "state": "completed", 00:17:02.685 "digest": "sha512", 00:17:02.685 "dhgroup": "ffdhe3072" 00:17:02.685 } 00:17:02.685 } 00:17:02.685 ]' 00:17:02.685 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.685 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.685 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.020 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:03.020 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.020 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.020 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.020 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.020 00:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2IyYmQzYmZiMDU3YjVmYjMyNDkyMTA2ZTA4MDNjMmNjYTlmMWQzNDViZTU5NzFiAN8c7w==: --dhchap-ctrl-secret DHHC-1:03:OGFjZmI2ZGZjYTg5OGYyZTIxM2MyZDg1MjhlNWJlODA1ZGEwMjJhMzgwYTBhOGFjMDcxMWI1ZTI2YzQwYjlkYf2tkOw=: 00:17:03.588 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.588 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.588 00:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:03.588 00:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.588 00:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:03.588 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.588 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:03.588 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:03.846 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:03.846 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.846 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:03.846 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:03.846 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:03.846 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.846 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.846 00:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:03.846 00:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.846 00:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:03.846 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.846 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.106 00:17:04.106 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.106 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.106 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.106 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.106 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.106 00:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:04.106 00:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.365 00:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:04.365 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.365 { 00:17:04.365 "cntlid": 115, 00:17:04.365 "qid": 0, 00:17:04.365 "state": "enabled", 00:17:04.365 "thread": "nvmf_tgt_poll_group_000", 00:17:04.365 "listen_address": { 00:17:04.365 "trtype": "TCP", 00:17:04.365 "adrfam": "IPv4", 00:17:04.365 "traddr": "10.0.0.2", 00:17:04.365 "trsvcid": "4420" 00:17:04.365 }, 00:17:04.365 "peer_address": { 00:17:04.365 "trtype": "TCP", 00:17:04.365 "adrfam": "IPv4", 00:17:04.365 "traddr": "10.0.0.1", 00:17:04.365 "trsvcid": "47188" 00:17:04.365 }, 00:17:04.365 "auth": { 00:17:04.365 "state": "completed", 00:17:04.365 "digest": "sha512", 00:17:04.365 "dhgroup": "ffdhe3072" 00:17:04.365 } 00:17:04.365 } 
00:17:04.365 ]' 00:17:04.365 00:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.365 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.365 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.365 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:04.365 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.365 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.365 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.365 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.624 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWU3ZjkwYTI2MGY1ZGMwYWY5MTFiZDAzZjkwZTVhZDGjnqBT: --dhchap-ctrl-secret DHHC-1:02:YmJmNzhmZGM5MmQ0YjA5N2RmZWNlNDk1OTE4ZjEzZjZjZTIzM2YxMmEwMzhiZDEzb47+Ng==: 00:17:05.190 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.190 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.190 00:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:05.190 00:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.190 00:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:05.190 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.190 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:05.190 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:05.190 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:05.190 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.190 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:05.190 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:05.190 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:05.190 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.190 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.190 00:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:05.190 00:18:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.190 00:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:05.190 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.190 00:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.449 00:17:05.449 00:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.449 00:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.449 00:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.707 00:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.707 00:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.707 00:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:05.707 00:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.707 00:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:05.707 00:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.707 { 00:17:05.707 "cntlid": 117, 00:17:05.707 "qid": 0, 00:17:05.707 "state": "enabled", 00:17:05.707 "thread": "nvmf_tgt_poll_group_000", 00:17:05.707 "listen_address": { 00:17:05.707 "trtype": "TCP", 00:17:05.707 "adrfam": "IPv4", 00:17:05.707 "traddr": "10.0.0.2", 00:17:05.707 "trsvcid": "4420" 00:17:05.707 }, 00:17:05.707 "peer_address": { 00:17:05.707 "trtype": "TCP", 00:17:05.707 "adrfam": "IPv4", 00:17:05.707 "traddr": "10.0.0.1", 00:17:05.707 "trsvcid": "47216" 00:17:05.707 }, 00:17:05.707 "auth": { 00:17:05.707 "state": "completed", 00:17:05.707 "digest": "sha512", 00:17:05.707 "dhgroup": "ffdhe3072" 00:17:05.707 } 00:17:05.707 } 00:17:05.707 ]' 00:17:05.707 00:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.707 00:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.707 00:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.707 00:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:05.707 00:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.707 00:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.707 00:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.707 00:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.967 00:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDk3MmY4MGNhMTQzNzNhOWJmZDRmMGU2Y2QxMjY2ZjEzOTcxYTM4ZDgzOGY5NzQ3C29SFA==: --dhchap-ctrl-secret DHHC-1:01:ZDRiN2I5OTFlOTRjNDlmMmM2YzEwNzcwMTU0MjE3OGQEbxAq: 00:17:06.534 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.534 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.534 00:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:06.534 00:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.534 00:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:06.534 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.534 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:06.534 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:06.793 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:06.793 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.793 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:06.793 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:06.793 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:06.793 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.793 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:06.793 00:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:06.793 00:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.793 00:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:06.793 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.793 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:07.053 00:17:07.053 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.053 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.053 00:18:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.053 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.053 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.053 00:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:07.053 00:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.313 00:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:07.313 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.313 { 00:17:07.313 "cntlid": 119, 00:17:07.313 "qid": 0, 00:17:07.313 "state": "enabled", 00:17:07.313 "thread": "nvmf_tgt_poll_group_000", 00:17:07.313 "listen_address": { 00:17:07.313 "trtype": "TCP", 00:17:07.313 "adrfam": "IPv4", 00:17:07.313 "traddr": "10.0.0.2", 00:17:07.313 "trsvcid": "4420" 00:17:07.313 }, 00:17:07.313 "peer_address": { 00:17:07.313 "trtype": "TCP", 00:17:07.313 "adrfam": "IPv4", 00:17:07.313 "traddr": "10.0.0.1", 00:17:07.313 "trsvcid": "47250" 00:17:07.313 }, 00:17:07.313 "auth": { 00:17:07.313 "state": "completed", 00:17:07.313 "digest": "sha512", 00:17:07.313 "dhgroup": "ffdhe3072" 00:17:07.313 } 00:17:07.313 } 00:17:07.313 ]' 00:17:07.313 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.313 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.313 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.313 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:07.313 00:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.313 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.313 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.313 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.572 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTRlYzQ3MGNmODNmMDk4NDk4MzgzMmZkOWI0NjE4ZWE1YjRiNjEzMTI5NjA3YmVhOTg2MDZjNGZmZTlhYjRjMcBoLk4=: 00:17:08.141 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.141 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.141 00:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:08.141 00:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.141 00:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:08.141 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.142 00:18:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.142 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:08.142 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:08.142 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:08.142 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.142 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:08.142 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:08.142 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:08.142 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.142 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.142 00:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:08.142 00:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.142 00:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:08.142 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.142 00:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.401 00:17:08.401 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.401 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.401 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.660 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.660 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.660 00:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:08.660 00:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.660 00:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:08.660 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.660 { 00:17:08.660 "cntlid": 121, 00:17:08.660 "qid": 0, 00:17:08.660 "state": "enabled", 00:17:08.660 "thread": "nvmf_tgt_poll_group_000", 00:17:08.660 "listen_address": { 00:17:08.660 "trtype": "TCP", 00:17:08.660 "adrfam": "IPv4", 
00:17:08.660 "traddr": "10.0.0.2", 00:17:08.660 "trsvcid": "4420" 00:17:08.660 }, 00:17:08.660 "peer_address": { 00:17:08.660 "trtype": "TCP", 00:17:08.660 "adrfam": "IPv4", 00:17:08.660 "traddr": "10.0.0.1", 00:17:08.660 "trsvcid": "59804" 00:17:08.660 }, 00:17:08.660 "auth": { 00:17:08.660 "state": "completed", 00:17:08.660 "digest": "sha512", 00:17:08.660 "dhgroup": "ffdhe4096" 00:17:08.660 } 00:17:08.660 } 00:17:08.660 ]' 00:17:08.660 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.660 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.660 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.919 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:08.919 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.919 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.919 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.919 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.919 00:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2IyYmQzYmZiMDU3YjVmYjMyNDkyMTA2ZTA4MDNjMmNjYTlmMWQzNDViZTU5NzFiAN8c7w==: --dhchap-ctrl-secret DHHC-1:03:OGFjZmI2ZGZjYTg5OGYyZTIxM2MyZDg1MjhlNWJlODA1ZGEwMjJhMzgwYTBhOGFjMDcxMWI1ZTI2YzQwYjlkYf2tkOw=: 00:17:09.488 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.488 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.488 00:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:09.488 00:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.488 00:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:09.488 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.488 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:09.488 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:09.746 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:09.746 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.746 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:09.746 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:09.746 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:09.746 00:18:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.746 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.746 00:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:09.746 00:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.746 00:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:09.746 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.746 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.005 00:17:10.005 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.005 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.005 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.262 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.262 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.262 00:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:10.262 00:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.262 00:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:10.262 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.262 { 00:17:10.262 "cntlid": 123, 00:17:10.262 "qid": 0, 00:17:10.262 "state": "enabled", 00:17:10.262 "thread": "nvmf_tgt_poll_group_000", 00:17:10.262 "listen_address": { 00:17:10.262 "trtype": "TCP", 00:17:10.262 "adrfam": "IPv4", 00:17:10.262 "traddr": "10.0.0.2", 00:17:10.262 "trsvcid": "4420" 00:17:10.262 }, 00:17:10.262 "peer_address": { 00:17:10.262 "trtype": "TCP", 00:17:10.262 "adrfam": "IPv4", 00:17:10.262 "traddr": "10.0.0.1", 00:17:10.262 "trsvcid": "59828" 00:17:10.262 }, 00:17:10.262 "auth": { 00:17:10.262 "state": "completed", 00:17:10.262 "digest": "sha512", 00:17:10.262 "dhgroup": "ffdhe4096" 00:17:10.262 } 00:17:10.262 } 00:17:10.262 ]' 00:17:10.262 00:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.262 00:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.262 00:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.262 00:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:10.262 00:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.262 00:18:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.262 00:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.262 00:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.520 00:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWU3ZjkwYTI2MGY1ZGMwYWY5MTFiZDAzZjkwZTVhZDGjnqBT: --dhchap-ctrl-secret DHHC-1:02:YmJmNzhmZGM5MmQ0YjA5N2RmZWNlNDk1OTE4ZjEzZjZjZTIzM2YxMmEwMzhiZDEzb47+Ng==: 00:17:11.087 00:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.087 00:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.087 00:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:11.087 00:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.087 00:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:11.087 00:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.087 00:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:11.087 00:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:11.345 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:11.345 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.345 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:11.345 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:11.345 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:11.345 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.345 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.345 00:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:11.345 00:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.345 00:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:11.345 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.345 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.603 00:17:11.603 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.603 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.603 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.861 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.861 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.861 00:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:11.861 00:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.861 00:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:11.861 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.861 { 00:17:11.861 "cntlid": 125, 00:17:11.861 "qid": 0, 00:17:11.861 "state": "enabled", 00:17:11.861 "thread": "nvmf_tgt_poll_group_000", 00:17:11.861 "listen_address": { 00:17:11.861 "trtype": "TCP", 00:17:11.861 "adrfam": "IPv4", 00:17:11.861 "traddr": "10.0.0.2", 00:17:11.861 "trsvcid": "4420" 00:17:11.861 }, 00:17:11.861 "peer_address": { 00:17:11.861 "trtype": "TCP", 00:17:11.861 "adrfam": "IPv4", 00:17:11.861 "traddr": "10.0.0.1", 00:17:11.861 "trsvcid": "59854" 00:17:11.861 }, 00:17:11.861 "auth": { 00:17:11.861 "state": "completed", 00:17:11.861 "digest": "sha512", 00:17:11.861 "dhgroup": "ffdhe4096" 00:17:11.861 } 00:17:11.861 } 00:17:11.861 ]' 00:17:11.861 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.861 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.861 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.861 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:11.861 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.861 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.861 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.861 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.118 00:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDk3MmY4MGNhMTQzNzNhOWJmZDRmMGU2Y2QxMjY2ZjEzOTcxYTM4ZDgzOGY5NzQ3C29SFA==: --dhchap-ctrl-secret DHHC-1:01:ZDRiN2I5OTFlOTRjNDlmMmM2YzEwNzcwMTU0MjE3OGQEbxAq: 00:17:12.685 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
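Every iteration in this transcript runs the same DH-HMAC-CHAP round trip from target/auth.sh, varying only the digest/dhgroup/key triple (the pass just above used sha512 / ffdhe4096 / key2). The following is a minimal bash sketch of that per-iteration pattern, condensed from the commands printed in the surrounding log. It is an illustration, not the literal test script: the variable names are placeholders, the DHHC-1 secret strings are deliberately elided, and the un-prefixed rpc.py calls are assumed to reach the target's default RPC socket (the log's rpc_cmd wrapper), while the host-side stack is addressed via -s /var/tmp/host.sock as shown above.

#!/usr/bin/env bash
# Condensed sketch of one (digest, dhgroup, key) iteration from this log.
# $rpc, $subnqn, $hostnqn, $key_id and the "DHHC-1:..." strings are
# illustrative stand-ins for the values printed in the transcript.
set -euo pipefail
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
digest=sha512 dhgroup=ffdhe4096 key_id=2

# 1. Restrict the host-side initiator to the digest/dhgroup under test.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Allow the host on the subsystem with the keys under test
#    (the ctrlr key is omitted on the unidirectional passes above).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$key_id" --dhchap-ctrlr-key "ckey$key_id"

# 3. Attach through the SPDK host stack, then verify that the qpair
#    completed authentication with the expected parameters.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key "key$key_id" --dhchap-ctrlr-key "ckey$key_id"
qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# 4. Repeat the handshake through the kernel initiator with nvme-cli,
#    passing the same secrets in DHHC-1 wire format, then clean up.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-secret "DHHC-1:02:..." --dhchap-ctrl-secret "DHHC-1:01:..."
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Between passes only the --dhchap-digests/--dhchap-dhgroups pair in step 1 and the key index in steps 2-4 change; the success criterion is the one checked by the jq lines throughout the log, namely a qpair whose auth object reports state "completed" with the negotiated digest and dhgroup.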
00:17:12.685 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.685 00:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:12.685 00:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.685 00:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:12.685 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.685 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:12.685 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:12.685 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:17:12.685 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.685 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:12.685 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:12.685 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:12.685 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.685 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:12.685 00:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:12.685 00:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.685 00:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:12.685 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.685 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.944 00:17:12.944 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.944 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.944 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.201 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.202 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.202 00:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:13.202 00:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:13.202 00:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:13.202 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.202 { 00:17:13.202 "cntlid": 127, 00:17:13.202 "qid": 0, 00:17:13.202 "state": "enabled", 00:17:13.202 "thread": "nvmf_tgt_poll_group_000", 00:17:13.202 "listen_address": { 00:17:13.202 "trtype": "TCP", 00:17:13.202 "adrfam": "IPv4", 00:17:13.202 "traddr": "10.0.0.2", 00:17:13.202 "trsvcid": "4420" 00:17:13.202 }, 00:17:13.202 "peer_address": { 00:17:13.202 "trtype": "TCP", 00:17:13.202 "adrfam": "IPv4", 00:17:13.202 "traddr": "10.0.0.1", 00:17:13.202 "trsvcid": "59888" 00:17:13.202 }, 00:17:13.202 "auth": { 00:17:13.202 "state": "completed", 00:17:13.202 "digest": "sha512", 00:17:13.202 "dhgroup": "ffdhe4096" 00:17:13.202 } 00:17:13.202 } 00:17:13.202 ]' 00:17:13.202 00:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.202 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.202 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.202 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:13.202 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.460 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.460 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.460 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.460 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTRlYzQ3MGNmODNmMDk4NDk4MzgzMmZkOWI0NjE4ZWE1YjRiNjEzMTI5NjA3YmVhOTg2MDZjNGZmZTlhYjRjMcBoLk4=: 00:17:14.027 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.027 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.027 00:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:14.027 00:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.027 00:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:14.027 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.027 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.027 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:14.027 00:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:14.286 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:17:14.287 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.287 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:14.287 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:14.287 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:14.287 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.287 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.287 00:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:14.287 00:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.287 00:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:14.287 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.287 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.549 00:17:14.549 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.549 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.549 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.810 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.810 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.810 00:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:14.810 00:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.810 00:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:14.810 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.810 { 00:17:14.810 "cntlid": 129, 00:17:14.810 "qid": 0, 00:17:14.810 "state": "enabled", 00:17:14.810 "thread": "nvmf_tgt_poll_group_000", 00:17:14.810 "listen_address": { 00:17:14.810 "trtype": "TCP", 00:17:14.810 "adrfam": "IPv4", 00:17:14.810 "traddr": "10.0.0.2", 00:17:14.810 "trsvcid": "4420" 00:17:14.810 }, 00:17:14.810 "peer_address": { 00:17:14.810 "trtype": "TCP", 00:17:14.810 "adrfam": "IPv4", 00:17:14.810 "traddr": "10.0.0.1", 00:17:14.810 "trsvcid": "59902" 00:17:14.810 }, 00:17:14.810 "auth": { 00:17:14.810 "state": "completed", 00:17:14.810 "digest": "sha512", 00:17:14.810 "dhgroup": "ffdhe6144" 00:17:14.810 } 00:17:14.810 } 00:17:14.810 ]' 00:17:14.810 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.810 00:18:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.810 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.810 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:14.810 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.810 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.810 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.810 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.068 00:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2IyYmQzYmZiMDU3YjVmYjMyNDkyMTA2ZTA4MDNjMmNjYTlmMWQzNDViZTU5NzFiAN8c7w==: --dhchap-ctrl-secret DHHC-1:03:OGFjZmI2ZGZjYTg5OGYyZTIxM2MyZDg1MjhlNWJlODA1ZGEwMjJhMzgwYTBhOGFjMDcxMWI1ZTI2YzQwYjlkYf2tkOw=: 00:17:15.636 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.636 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.636 00:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:15.636 00:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.636 00:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:15.636 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.636 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:15.636 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:15.895 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:15.895 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.895 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:15.895 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:15.895 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:15.895 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.895 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.895 00:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:15.895 00:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.895 00:18:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:15.895 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.895 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.154 00:17:16.154 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.154 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.154 00:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.413 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.413 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.413 00:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:16.413 00:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.413 00:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:16.413 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.413 { 00:17:16.413 "cntlid": 131, 00:17:16.413 "qid": 0, 00:17:16.413 "state": "enabled", 00:17:16.413 "thread": "nvmf_tgt_poll_group_000", 00:17:16.413 "listen_address": { 00:17:16.413 "trtype": "TCP", 00:17:16.413 "adrfam": "IPv4", 00:17:16.413 "traddr": "10.0.0.2", 00:17:16.413 "trsvcid": "4420" 00:17:16.413 }, 00:17:16.413 "peer_address": { 00:17:16.413 "trtype": "TCP", 00:17:16.413 "adrfam": "IPv4", 00:17:16.413 "traddr": "10.0.0.1", 00:17:16.413 "trsvcid": "59922" 00:17:16.413 }, 00:17:16.413 "auth": { 00:17:16.413 "state": "completed", 00:17:16.413 "digest": "sha512", 00:17:16.413 "dhgroup": "ffdhe6144" 00:17:16.413 } 00:17:16.413 } 00:17:16.413 ]' 00:17:16.413 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.413 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.413 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.413 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:16.413 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.413 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.413 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.413 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.672 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWU3ZjkwYTI2MGY1ZGMwYWY5MTFiZDAzZjkwZTVhZDGjnqBT: --dhchap-ctrl-secret DHHC-1:02:YmJmNzhmZGM5MmQ0YjA5N2RmZWNlNDk1OTE4ZjEzZjZjZTIzM2YxMmEwMzhiZDEzb47+Ng==: 00:17:17.239 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.240 00:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.240 00:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:17.240 00:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.240 00:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:17.240 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.240 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:17.240 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:17.499 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:17.499 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.499 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:17.499 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:17.499 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:17.499 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.499 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.499 00:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:17.499 00:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.499 00:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:17.499 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.499 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.757 00:17:17.757 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.757 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.757 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.016 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.016 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.016 00:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:18.016 00:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.016 00:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:18.016 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.016 { 00:17:18.016 "cntlid": 133, 00:17:18.016 "qid": 0, 00:17:18.016 "state": "enabled", 00:17:18.016 "thread": "nvmf_tgt_poll_group_000", 00:17:18.016 "listen_address": { 00:17:18.016 "trtype": "TCP", 00:17:18.016 "adrfam": "IPv4", 00:17:18.016 "traddr": "10.0.0.2", 00:17:18.016 "trsvcid": "4420" 00:17:18.016 }, 00:17:18.016 "peer_address": { 00:17:18.016 "trtype": "TCP", 00:17:18.016 "adrfam": "IPv4", 00:17:18.016 "traddr": "10.0.0.1", 00:17:18.016 "trsvcid": "59936" 00:17:18.016 }, 00:17:18.016 "auth": { 00:17:18.016 "state": "completed", 00:17:18.016 "digest": "sha512", 00:17:18.016 "dhgroup": "ffdhe6144" 00:17:18.016 } 00:17:18.016 } 00:17:18.016 ]' 00:17:18.016 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.016 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.016 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.016 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.016 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.016 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.016 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.016 00:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.275 00:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDk3MmY4MGNhMTQzNzNhOWJmZDRmMGU2Y2QxMjY2ZjEzOTcxYTM4ZDgzOGY5NzQ3C29SFA==: --dhchap-ctrl-secret DHHC-1:01:ZDRiN2I5OTFlOTRjNDlmMmM2YzEwNzcwMTU0MjE3OGQEbxAq: 00:17:18.844 00:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.844 00:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:18.844 00:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:18.844 00:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.844 00:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
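Each key index in this loop gets the same round trip: the target registers the host NQN with a DH-HMAC-CHAP key pair, the host attaches a controller through its private RPC socket (/var/tmp/host.sock), the qpair's auth block is verified with jq, and the controller is detached again. A minimal sketch of one such round, reusing the NQNs from the log (rpc.py stands for the full scripts/rpc.py path shown above):

# sketch of one connect_authenticate round (here key2); same RPCs as the log
key=key2
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "$key" --dhchap-ctrlr-key "c$key"            # target side
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key "$key" --dhchap-ctrlr-key "c$key"            # host side
rpc.py nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect: completed
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0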
00:17:18.844 00:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.844 00:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:18.844 00:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.103 00:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:19.103 00:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.103 00:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:19.103 00:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:19.103 00:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:19.103 00:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.103 00:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:19.103 00:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:19.103 00:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.103 00:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:19.103 00:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.103 00:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.362 00:17:19.362 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.362 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.362 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.650 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.650 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.650 00:18:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:19.650 00:18:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.650 00:18:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:19.650 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.650 { 00:17:19.650 "cntlid": 135, 00:17:19.650 "qid": 0, 00:17:19.650 "state": "enabled", 00:17:19.650 "thread": "nvmf_tgt_poll_group_000", 00:17:19.650 "listen_address": { 00:17:19.650 "trtype": "TCP", 00:17:19.650 "adrfam": "IPv4", 00:17:19.650 "traddr": "10.0.0.2", 00:17:19.650 "trsvcid": 
"4420" 00:17:19.650 }, 00:17:19.650 "peer_address": { 00:17:19.650 "trtype": "TCP", 00:17:19.650 "adrfam": "IPv4", 00:17:19.650 "traddr": "10.0.0.1", 00:17:19.650 "trsvcid": "35134" 00:17:19.650 }, 00:17:19.650 "auth": { 00:17:19.650 "state": "completed", 00:17:19.650 "digest": "sha512", 00:17:19.650 "dhgroup": "ffdhe6144" 00:17:19.650 } 00:17:19.650 } 00:17:19.650 ]' 00:17:19.650 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.650 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.650 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.650 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:19.650 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.650 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.650 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.650 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.944 00:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTRlYzQ3MGNmODNmMDk4NDk4MzgzMmZkOWI0NjE4ZWE1YjRiNjEzMTI5NjA3YmVhOTg2MDZjNGZmZTlhYjRjMcBoLk4=: 00:17:20.513 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.513 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.513 00:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:20.513 00:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.513 00:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:20.513 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.513 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.513 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:20.513 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:20.772 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:20.772 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.772 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:20.772 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:20.772 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:20.772 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.772 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.772 00:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:20.772 00:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.772 00:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:20.772 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.772 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.030 00:17:21.289 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.289 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.289 00:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.289 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.289 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.289 00:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:21.289 00:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.289 00:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:21.289 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.289 { 00:17:21.289 "cntlid": 137, 00:17:21.289 "qid": 0, 00:17:21.289 "state": "enabled", 00:17:21.289 "thread": "nvmf_tgt_poll_group_000", 00:17:21.289 "listen_address": { 00:17:21.289 "trtype": "TCP", 00:17:21.289 "adrfam": "IPv4", 00:17:21.289 "traddr": "10.0.0.2", 00:17:21.289 "trsvcid": "4420" 00:17:21.289 }, 00:17:21.289 "peer_address": { 00:17:21.289 "trtype": "TCP", 00:17:21.289 "adrfam": "IPv4", 00:17:21.289 "traddr": "10.0.0.1", 00:17:21.289 "trsvcid": "35150" 00:17:21.289 }, 00:17:21.289 "auth": { 00:17:21.289 "state": "completed", 00:17:21.289 "digest": "sha512", 00:17:21.289 "dhgroup": "ffdhe8192" 00:17:21.289 } 00:17:21.289 } 00:17:21.289 ]' 00:17:21.289 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.289 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.289 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.548 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:21.549 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.549 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:21.549 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.549 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.549 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2IyYmQzYmZiMDU3YjVmYjMyNDkyMTA2ZTA4MDNjMmNjYTlmMWQzNDViZTU5NzFiAN8c7w==: --dhchap-ctrl-secret DHHC-1:03:OGFjZmI2ZGZjYTg5OGYyZTIxM2MyZDg1MjhlNWJlODA1ZGEwMjJhMzgwYTBhOGFjMDcxMWI1ZTI2YzQwYjlkYf2tkOw=: 00:17:22.116 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.116 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:22.116 00:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:22.116 00:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.116 00:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:22.116 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.116 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.116 00:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.374 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:22.374 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.374 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:22.374 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:22.374 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:22.374 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.374 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.374 00:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:22.374 00:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.375 00:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:22.375 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.375 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.943 00:17:22.943 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.943 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.943 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.943 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.943 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.943 00:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:22.943 00:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.202 00:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:23.202 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.202 { 00:17:23.202 "cntlid": 139, 00:17:23.202 "qid": 0, 00:17:23.202 "state": "enabled", 00:17:23.202 "thread": "nvmf_tgt_poll_group_000", 00:17:23.202 "listen_address": { 00:17:23.202 "trtype": "TCP", 00:17:23.202 "adrfam": "IPv4", 00:17:23.202 "traddr": "10.0.0.2", 00:17:23.202 "trsvcid": "4420" 00:17:23.202 }, 00:17:23.202 "peer_address": { 00:17:23.202 "trtype": "TCP", 00:17:23.202 "adrfam": "IPv4", 00:17:23.202 "traddr": "10.0.0.1", 00:17:23.202 "trsvcid": "35176" 00:17:23.202 }, 00:17:23.202 "auth": { 00:17:23.202 "state": "completed", 00:17:23.202 "digest": "sha512", 00:17:23.202 "dhgroup": "ffdhe8192" 00:17:23.202 } 00:17:23.202 } 00:17:23.202 ]' 00:17:23.202 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.202 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.202 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.202 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:23.202 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.202 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.202 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.202 00:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.461 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZWU3ZjkwYTI2MGY1ZGMwYWY5MTFiZDAzZjkwZTVhZDGjnqBT: --dhchap-ctrl-secret DHHC-1:02:YmJmNzhmZGM5MmQ0YjA5N2RmZWNlNDk1OTE4ZjEzZjZjZTIzM2YxMmEwMzhiZDEzb47+Ng==: 00:17:24.027 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
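All of the --dhchap-secret / --dhchap-ctrl-secret values passed to nvme connect share the DHHC-1:<id>:<base64>: envelope. As a reading aid (this interpretation follows the NVMe-oF secret representation from TP 8006 and is an assumption, not something the log itself asserts): the middle field names the key transformation (00 = unhashed, 01/02/03 = SHA-256/384/512), and the base64 payload decodes to the secret followed by a 4-byte CRC-32. A quick shell check against one of the secrets above:

# illustrative only: split a DHHC-1 envelope from the log and size its payload
s='DHHC-1:01:ZWU3ZjkwYTI2MGY1ZGMwYWY5MTFiZDAzZjkwZTVhZDGjnqBT:'
IFS=: read -r tag xform blob _ <<< "$s"
echo "envelope=$tag transform-id=$xform"
printf %s "$blob" | base64 -d | wc -c    # 36 = 32-byte secret + 4-byte CRC-32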
00:17:24.027 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.027 00:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:24.027 00:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.027 00:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:24.027 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.027 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.027 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.027 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:24.027 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.027 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:24.027 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:24.027 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:24.027 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.027 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.027 00:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:24.027 00:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.027 00:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:24.027 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.027 00:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.602 00:17:24.602 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.602 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.602 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.865 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.865 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.865 00:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 
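The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line that recurs at target/auth.sh@37 is doing real work: when the controller key for a given index is unset (key3 carries no ckey3 in this run, which is why its nvmf_subsystem_add_host calls have no --dhchap-ctrlr-key), the ${var:+...} expansion yields zero words and authentication is one-way instead of bidirectional. A standalone illustration of the idiom:

# the ${var:+...} array idiom from target/auth.sh@37, in isolation
ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]=)
for i in 0 3; do
  ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
  echo "key$i -> ${#ckey[@]} extra word(s): ${ckey[*]}"
done
# key0 -> 2 extra word(s): --dhchap-ctrlr-key ckey0
# key3 -> 0 extra word(s):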
00:17:24.865 00:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.865 00:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:24.865 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.865 { 00:17:24.865 "cntlid": 141, 00:17:24.865 "qid": 0, 00:17:24.865 "state": "enabled", 00:17:24.865 "thread": "nvmf_tgt_poll_group_000", 00:17:24.865 "listen_address": { 00:17:24.865 "trtype": "TCP", 00:17:24.865 "adrfam": "IPv4", 00:17:24.865 "traddr": "10.0.0.2", 00:17:24.865 "trsvcid": "4420" 00:17:24.865 }, 00:17:24.865 "peer_address": { 00:17:24.865 "trtype": "TCP", 00:17:24.865 "adrfam": "IPv4", 00:17:24.865 "traddr": "10.0.0.1", 00:17:24.865 "trsvcid": "35196" 00:17:24.865 }, 00:17:24.865 "auth": { 00:17:24.865 "state": "completed", 00:17:24.865 "digest": "sha512", 00:17:24.865 "dhgroup": "ffdhe8192" 00:17:24.865 } 00:17:24.865 } 00:17:24.865 ]' 00:17:24.865 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.865 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.865 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.865 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:24.865 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.865 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.865 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.865 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.123 00:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZDk3MmY4MGNhMTQzNzNhOWJmZDRmMGU2Y2QxMjY2ZjEzOTcxYTM4ZDgzOGY5NzQ3C29SFA==: --dhchap-ctrl-secret DHHC-1:01:ZDRiN2I5OTFlOTRjNDlmMmM2YzEwNzcwMTU0MjE3OGQEbxAq: 00:17:25.691 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.691 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.691 00:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:25.691 00:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.691 00:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:25.691 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.691 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:25.691 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:25.691 00:18:44 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:17:25.691 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.691 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:25.691 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:25.691 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:25.691 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.691 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:25.691 00:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:25.691 00:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.691 00:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:25.691 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:25.691 00:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.260 00:17:26.260 00:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.260 00:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.260 00:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.519 00:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.519 00:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.519 00:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:26.519 00:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.519 00:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:26.519 00:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.519 { 00:17:26.519 "cntlid": 143, 00:17:26.519 "qid": 0, 00:17:26.519 "state": "enabled", 00:17:26.519 "thread": "nvmf_tgt_poll_group_000", 00:17:26.519 "listen_address": { 00:17:26.519 "trtype": "TCP", 00:17:26.519 "adrfam": "IPv4", 00:17:26.519 "traddr": "10.0.0.2", 00:17:26.519 "trsvcid": "4420" 00:17:26.519 }, 00:17:26.519 "peer_address": { 00:17:26.519 "trtype": "TCP", 00:17:26.519 "adrfam": "IPv4", 00:17:26.519 "traddr": "10.0.0.1", 00:17:26.519 "trsvcid": "35214" 00:17:26.519 }, 00:17:26.519 "auth": { 00:17:26.519 "state": "completed", 00:17:26.519 "digest": "sha512", 00:17:26.519 "dhgroup": "ffdhe8192" 00:17:26.519 } 00:17:26.519 } 00:17:26.519 ]' 00:17:26.519 00:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.520 00:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 
-- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.520 00:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.520 00:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:26.520 00:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.520 00:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.520 00:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.520 00:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.779 00:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTRlYzQ3MGNmODNmMDk4NDk4MzgzMmZkOWI0NjE4ZWE1YjRiNjEzMTI5NjA3YmVhOTg2MDZjNGZmZTlhYjRjMcBoLk4=: 00:17:27.346 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.346 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.346 00:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:27.346 00:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.346 00:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:27.346 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:27.346 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:27.346 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:27.346 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.346 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.346 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.605 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:27.605 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.605 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:27.605 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:27.605 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:27.605 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.605 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.605 00:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:27.605 00:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.605 00:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:27.605 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.605 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.174 00:17:28.174 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.174 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.174 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.174 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.174 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.174 00:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:28.174 00:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.174 00:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:28.174 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.174 { 00:17:28.174 "cntlid": 145, 00:17:28.174 "qid": 0, 00:17:28.174 "state": "enabled", 00:17:28.174 "thread": "nvmf_tgt_poll_group_000", 00:17:28.174 "listen_address": { 00:17:28.174 "trtype": "TCP", 00:17:28.174 "adrfam": "IPv4", 00:17:28.174 "traddr": "10.0.0.2", 00:17:28.174 "trsvcid": "4420" 00:17:28.174 }, 00:17:28.174 "peer_address": { 00:17:28.174 "trtype": "TCP", 00:17:28.174 "adrfam": "IPv4", 00:17:28.174 "traddr": "10.0.0.1", 00:17:28.174 "trsvcid": "35246" 00:17:28.174 }, 00:17:28.174 "auth": { 00:17:28.174 "state": "completed", 00:17:28.174 "digest": "sha512", 00:17:28.174 "dhgroup": "ffdhe8192" 00:17:28.174 } 00:17:28.174 } 00:17:28.174 ]' 00:17:28.174 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.174 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.174 00:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.174 00:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.174 00:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.433 00:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.433 00:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.433 00:18:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.434 00:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2IyYmQzYmZiMDU3YjVmYjMyNDkyMTA2ZTA4MDNjMmNjYTlmMWQzNDViZTU5NzFiAN8c7w==: --dhchap-ctrl-secret DHHC-1:03:OGFjZmI2ZGZjYTg5OGYyZTIxM2MyZDg1MjhlNWJlODA1ZGEwMjJhMzgwYTBhOGFjMDcxMWI1ZTI2YzQwYjlkYf2tkOw=: 00:17:29.002 00:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.002 00:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.002 00:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:29.002 00:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.002 00:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:29.002 00:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:29.002 00:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:29.002 00:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.002 00:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:29.002 00:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:29.002 00:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:17:29.002 00:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:29.002 00:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:17:29.002 00:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:29.002 00:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:17:29.002 00:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:29.002 00:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:29.002 00:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:29.570 request: 00:17:29.570 { 00:17:29.571 "name": "nvme0", 00:17:29.571 "trtype": "tcp", 00:17:29.571 "traddr": "10.0.0.2", 00:17:29.571 "adrfam": "ipv4", 00:17:29.571 "trsvcid": "4420", 00:17:29.571 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:29.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:29.571 "prchk_reftag": false, 00:17:29.571 "prchk_guard": false, 00:17:29.571 "hdgst": false, 00:17:29.571 "ddgst": false, 00:17:29.571 "dhchap_key": "key2", 00:17:29.571 "method": "bdev_nvme_attach_controller", 00:17:29.571 "req_id": 1 00:17:29.571 } 00:17:29.571 Got JSON-RPC error response 00:17:29.571 response: 00:17:29.571 { 00:17:29.571 "code": -5, 00:17:29.571 "message": "Input/output error" 00:17:29.571 } 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:29.571 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:30.140 request: 00:17:30.140 { 00:17:30.140 "name": "nvme0", 00:17:30.140 "trtype": "tcp", 00:17:30.140 "traddr": "10.0.0.2", 00:17:30.140 "adrfam": "ipv4", 00:17:30.140 "trsvcid": "4420", 00:17:30.140 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:30.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:30.140 "prchk_reftag": false, 00:17:30.140 "prchk_guard": false, 00:17:30.140 "hdgst": false, 00:17:30.140 "ddgst": false, 00:17:30.140 "dhchap_key": "key1", 00:17:30.140 "dhchap_ctrlr_key": "ckey2", 00:17:30.140 "method": "bdev_nvme_attach_controller", 00:17:30.140 "req_id": 1 00:17:30.140 } 00:17:30.140 Got JSON-RPC error response 00:17:30.140 response: 00:17:30.140 { 00:17:30.140 "code": -5, 00:17:30.140 "message": "Input/output error" 00:17:30.140 } 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@630 -- # local arg=hostrpc 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.140 00:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.398 request: 00:17:30.398 { 00:17:30.398 "name": "nvme0", 00:17:30.398 "trtype": "tcp", 00:17:30.398 "traddr": "10.0.0.2", 00:17:30.398 "adrfam": "ipv4", 00:17:30.398 "trsvcid": "4420", 00:17:30.398 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:30.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:30.398 "prchk_reftag": false, 00:17:30.398 "prchk_guard": false, 00:17:30.398 "hdgst": false, 00:17:30.398 "ddgst": false, 00:17:30.398 "dhchap_key": "key1", 00:17:30.398 "dhchap_ctrlr_key": "ckey1", 00:17:30.398 "method": "bdev_nvme_attach_controller", 00:17:30.398 "req_id": 1 00:17:30.398 } 00:17:30.399 Got JSON-RPC error response 00:17:30.399 response: 00:17:30.399 { 00:17:30.399 "code": -5, 00:17:30.399 "message": "Input/output error" 00:17:30.399 } 00:17:30.399 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:17:30.399 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:17:30.399 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:17:30.399 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:17:30.399 00:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.399 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:30.399 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.399 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:30.399 00:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1503004 00:17:30.399 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@942 -- # '[' -z 1503004 ']' 00:17:30.399 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # kill -0 1503004 00:17:30.399 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # uname 00:17:30.399 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:17:30.399 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1503004 00:17:30.399 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:17:30.399 00:18:49 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:17:30.399 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1503004' 00:17:30.399 killing process with pid 1503004 00:17:30.399 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@961 -- # kill 1503004 00:17:30.399 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # wait 1503004 00:17:30.657 00:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:30.657 00:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:30.657 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:30.657 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.657 00:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1523914 00:17:30.657 00:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1523914 00:17:30.657 00:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:30.657 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 1523914 ']' 00:17:30.657 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.657 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:30.657 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.657 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:30.657 00:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:31.592 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:17:31.593 00:18:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:31.593 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:31.593 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.593 00:18:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.593 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:31.593 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1523914 00:17:31.593 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 1523914 ']' 00:17:31.593 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.593 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:31.593 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:31.593 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:31.593 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.851 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:31.851 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:17:31.851 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:31.851 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:31.851 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.851 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:31.851 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:31.851 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.851 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:31.851 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:31.851 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:31.851 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.851 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:31.851 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:31.851 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.851 00:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:31.851 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.851 00:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.418 00:17:32.418 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.418 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.418 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.418 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.418 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.418 00:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:32.418 00:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.418 00:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:32.418 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.418 { 00:17:32.418 
"cntlid": 1, 00:17:32.418 "qid": 0, 00:17:32.418 "state": "enabled", 00:17:32.418 "thread": "nvmf_tgt_poll_group_000", 00:17:32.418 "listen_address": { 00:17:32.418 "trtype": "TCP", 00:17:32.418 "adrfam": "IPv4", 00:17:32.418 "traddr": "10.0.0.2", 00:17:32.418 "trsvcid": "4420" 00:17:32.418 }, 00:17:32.418 "peer_address": { 00:17:32.418 "trtype": "TCP", 00:17:32.418 "adrfam": "IPv4", 00:17:32.418 "traddr": "10.0.0.1", 00:17:32.418 "trsvcid": "54264" 00:17:32.418 }, 00:17:32.418 "auth": { 00:17:32.418 "state": "completed", 00:17:32.419 "digest": "sha512", 00:17:32.419 "dhgroup": "ffdhe8192" 00:17:32.419 } 00:17:32.419 } 00:17:32.419 ]' 00:17:32.677 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.677 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.677 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.677 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:32.677 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.677 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.677 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.677 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.936 00:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTRlYzQ3MGNmODNmMDk4NDk4MzgzMmZkOWI0NjE4ZWE1YjRiNjEzMTI5NjA3YmVhOTg2MDZjNGZmZTlhYjRjMcBoLk4=: 00:17:33.507 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.507 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.507 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:33.507 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.507 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:33.507 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:33.507 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:33.507 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.507 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:33.507 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:33.507 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:33.507 00:18:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.507 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:17:33.507 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.507 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:17:33.507 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:33.507 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:17:33.507 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:33.507 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.507 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.764 request: 00:17:33.764 { 00:17:33.764 "name": "nvme0", 00:17:33.764 "trtype": "tcp", 00:17:33.764 "traddr": "10.0.0.2", 00:17:33.764 "adrfam": "ipv4", 00:17:33.764 "trsvcid": "4420", 00:17:33.764 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:33.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:33.764 "prchk_reftag": false, 00:17:33.764 "prchk_guard": false, 00:17:33.764 "hdgst": false, 00:17:33.764 "ddgst": false, 00:17:33.764 "dhchap_key": "key3", 00:17:33.764 "method": "bdev_nvme_attach_controller", 00:17:33.764 "req_id": 1 00:17:33.764 } 00:17:33.764 Got JSON-RPC error response 00:17:33.764 response: 00:17:33.764 { 00:17:33.764 "code": -5, 00:17:33.764 "message": "Input/output error" 00:17:33.764 } 00:17:33.764 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:17:33.764 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:17:33.764 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:17:33.764 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:17:33.764 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:33.764 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:33.764 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:33.764 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:34.023 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.023 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:17:34.023 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.023 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:17:34.023 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:34.023 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:17:34.023 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:34.023 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.023 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.023 request: 00:17:34.023 { 00:17:34.023 "name": "nvme0", 00:17:34.023 "trtype": "tcp", 00:17:34.023 "traddr": "10.0.0.2", 00:17:34.023 "adrfam": "ipv4", 00:17:34.023 "trsvcid": "4420", 00:17:34.023 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:34.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:34.023 "prchk_reftag": false, 00:17:34.023 "prchk_guard": false, 00:17:34.023 "hdgst": false, 00:17:34.023 "ddgst": false, 00:17:34.023 "dhchap_key": "key3", 00:17:34.023 "method": "bdev_nvme_attach_controller", 00:17:34.023 "req_id": 1 00:17:34.023 } 00:17:34.023 Got JSON-RPC error response 00:17:34.023 response: 00:17:34.023 { 00:17:34.023 "code": -5, 00:17:34.023 "message": "Input/output error" 00:17:34.023 } 00:17:34.281 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:17:34.281 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:17:34.281 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:17:34.281 00:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:17:34.281 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:34.281 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:34.281 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:34.281 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:34.281 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:34.281 00:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:34.281 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:34.281 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:34.281 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.281 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:34.281 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:34.281 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:34.281 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.281 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:34.281 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:34.281 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:17:34.281 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:34.281 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:17:34.281 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:34.281 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:17:34.282 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:34.282 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:34.282 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:34.546 request: 00:17:34.546 { 00:17:34.547 "name": "nvme0", 00:17:34.547 "trtype": "tcp", 00:17:34.547 "traddr": "10.0.0.2", 00:17:34.547 "adrfam": "ipv4", 00:17:34.547 "trsvcid": "4420", 00:17:34.547 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:34.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:34.547 "prchk_reftag": false, 00:17:34.547 "prchk_guard": false, 00:17:34.547 "hdgst": false, 00:17:34.547 "ddgst": false, 00:17:34.547 
"dhchap_key": "key0", 00:17:34.547 "dhchap_ctrlr_key": "key1", 00:17:34.547 "method": "bdev_nvme_attach_controller", 00:17:34.547 "req_id": 1 00:17:34.547 } 00:17:34.547 Got JSON-RPC error response 00:17:34.547 response: 00:17:34.547 { 00:17:34.547 "code": -5, 00:17:34.547 "message": "Input/output error" 00:17:34.547 } 00:17:34.547 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:17:34.547 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:17:34.547 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:17:34.547 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:17:34.547 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:34.547 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:34.806 00:17:34.806 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:34.806 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.806 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:17:35.063 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.063 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.063 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.063 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:35.063 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:35.064 00:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1503246 00:17:35.064 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@942 -- # '[' -z 1503246 ']' 00:17:35.064 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # kill -0 1503246 00:17:35.064 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # uname 00:17:35.064 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:17:35.064 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1503246 00:17:35.064 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:17:35.064 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:17:35.064 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1503246' 00:17:35.064 killing process with pid 1503246 00:17:35.064 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@961 -- # kill 1503246 00:17:35.064 00:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # wait 1503246 
00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:35.629 rmmod nvme_tcp 00:17:35.629 rmmod nvme_fabrics 00:17:35.629 rmmod nvme_keyring 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1523914 ']' 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1523914 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@942 -- # '[' -z 1523914 ']' 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # kill -0 1523914 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # uname 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1523914 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1523914' 00:17:35.629 killing process with pid 1523914 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@961 -- # kill 1523914 00:17:35.629 00:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # wait 1523914 00:17:35.888 00:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:35.888 00:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:35.888 00:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:35.888 00:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:35.888 00:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:35.888 00:18:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.888 00:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.888 00:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.832 00:18:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:37.832 00:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.2WA /tmp/spdk.key-sha256.bH9 /tmp/spdk.key-sha384.ZdC /tmp/spdk.key-sha512.YWs /tmp/spdk.key-sha512.uMA /tmp/spdk.key-sha384.X6F /tmp/spdk.key-sha256.kfl '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:37.832 00:17:37.832 real 2m10.397s 00:17:37.832 user 4m59.554s 00:17:37.832 sys 0m20.095s 00:17:37.832 00:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1118 -- # xtrace_disable 00:17:37.832 00:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.832 ************************************ 00:17:37.832 END TEST nvmf_auth_target 00:17:37.832 ************************************ 00:17:37.832 00:18:56 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:17:37.832 00:18:56 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:37.832 00:18:56 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:37.832 00:18:56 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:17:37.832 00:18:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:17:37.832 00:18:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:37.832 ************************************ 00:17:37.832 START TEST nvmf_bdevio_no_huge 00:17:37.832 ************************************ 00:17:37.832 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:38.092 * Looking for test storage... 00:17:38.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:38.092 00:18:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:42.290 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:42.290 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:42.290 Found net devices under 0000:86:00.0: cvl_0_0 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:42.290 Found net devices under 0000:86:00.1: cvl_0_1 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:42.290 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:42.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:42.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:17:42.551 00:17:42.551 --- 10.0.0.2 ping statistics --- 00:17:42.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.551 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:42.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:42.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:17:42.551 00:17:42.551 --- 10.0.0.1 ping statistics --- 00:17:42.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.551 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1528171 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1528171 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@823 -- # '[' -z 1528171 ']' 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:42.551 00:19:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:42.811 [2024-07-16 00:19:01.437758] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:17:42.811 [2024-07-16 00:19:01.437804] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:42.811 [2024-07-16 00:19:01.500335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:42.811 [2024-07-16 00:19:01.585100] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:42.811 [2024-07-16 00:19:01.585136] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.811 [2024-07-16 00:19:01.585143] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.811 [2024-07-16 00:19:01.585149] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.811 [2024-07-16 00:19:01.585153] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.811 [2024-07-16 00:19:01.585268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:42.811 [2024-07-16 00:19:01.585377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:42.811 [2024-07-16 00:19:01.585483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:42.811 [2024-07-16 00:19:01.585483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:43.379 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:43.379 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # return 0 00:17:43.379 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:43.379 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:43.379 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:43.639 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:43.640 [2024-07-16 00:19:02.272134] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:43.640 Malloc0 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:43.640 00:19:02 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:43.640 [2024-07-16 00:19:02.312405] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:43.640 { 00:17:43.640 "params": { 00:17:43.640 "name": "Nvme$subsystem", 00:17:43.640 "trtype": "$TEST_TRANSPORT", 00:17:43.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:43.640 "adrfam": "ipv4", 00:17:43.640 "trsvcid": "$NVMF_PORT", 00:17:43.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:43.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:43.640 "hdgst": ${hdgst:-false}, 00:17:43.640 "ddgst": ${ddgst:-false} 00:17:43.640 }, 00:17:43.640 "method": "bdev_nvme_attach_controller" 00:17:43.640 } 00:17:43.640 EOF 00:17:43.640 )") 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:43.640 00:19:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:43.640 "params": { 00:17:43.640 "name": "Nvme1", 00:17:43.640 "trtype": "tcp", 00:17:43.640 "traddr": "10.0.0.2", 00:17:43.640 "adrfam": "ipv4", 00:17:43.640 "trsvcid": "4420", 00:17:43.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.640 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.640 "hdgst": false, 00:17:43.640 "ddgst": false 00:17:43.640 }, 00:17:43.640 "method": "bdev_nvme_attach_controller" 00:17:43.640 }' 00:17:43.640 [2024-07-16 00:19:02.361569] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:17:43.640 [2024-07-16 00:19:02.361616] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1528244 ] 00:17:43.640 [2024-07-16 00:19:02.420829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:43.900 [2024-07-16 00:19:02.509877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.900 [2024-07-16 00:19:02.509973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.900 [2024-07-16 00:19:02.509973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.900 I/O targets: 00:17:43.900 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:43.900 00:17:43.900 00:17:43.900 CUnit - A unit testing framework for C - Version 2.1-3 00:17:43.900 http://cunit.sourceforge.net/ 00:17:43.900 00:17:43.900 00:17:43.900 Suite: bdevio tests on: Nvme1n1 00:17:43.900 Test: blockdev write read block ...passed 00:17:43.900 Test: blockdev write zeroes read block ...passed 00:17:44.159 Test: blockdev write zeroes read no split ...passed 00:17:44.159 Test: blockdev write zeroes read split ...passed 00:17:44.159 Test: blockdev write zeroes read split partial ...passed 00:17:44.159 Test: blockdev reset ...[2024-07-16 00:19:02.867677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:44.159 [2024-07-16 00:19:02.867739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a8300 (9): Bad file descriptor 00:17:44.159 [2024-07-16 00:19:02.924839] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:44.159 passed 00:17:44.159 Test: blockdev write read 8 blocks ...passed 00:17:44.159 Test: blockdev write read size > 128k ...passed 00:17:44.159 Test: blockdev write read invalid size ...passed 00:17:44.159 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:44.159 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:44.159 Test: blockdev write read max offset ...passed 00:17:44.419 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:44.419 Test: blockdev writev readv 8 blocks ...passed 00:17:44.419 Test: blockdev writev readv 30 x 1block ...passed 00:17:44.419 Test: blockdev writev readv block ...passed 00:17:44.419 Test: blockdev writev readv size > 128k ...passed 00:17:44.419 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:44.419 Test: blockdev comparev and writev ...[2024-07-16 00:19:03.140518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:44.419 [2024-07-16 00:19:03.140547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.419 [2024-07-16 00:19:03.140561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:44.419 [2024-07-16 00:19:03.140573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:44.419 [2024-07-16 00:19:03.140871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:44.419 [2024-07-16 00:19:03.140882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:44.419 [2024-07-16 00:19:03.140895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:44.419 [2024-07-16 00:19:03.140902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:44.419 [2024-07-16 00:19:03.141188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:44.419 [2024-07-16 00:19:03.141199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:44.419 [2024-07-16 00:19:03.141210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:44.419 [2024-07-16 00:19:03.141217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:44.419 [2024-07-16 00:19:03.141515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:44.419 [2024-07-16 00:19:03.141528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:44.419 [2024-07-16 00:19:03.141539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:44.419 [2024-07-16 00:19:03.141547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:44.419 passed 00:17:44.419 Test: blockdev nvme passthru rw ...passed 00:17:44.419 Test: blockdev nvme passthru vendor specific ...[2024-07-16 00:19:03.223718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:44.419 [2024-07-16 00:19:03.223739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:44.419 [2024-07-16 00:19:03.223901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:44.419 [2024-07-16 00:19:03.223911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:44.419 [2024-07-16 00:19:03.224068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:44.419 [2024-07-16 00:19:03.224078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:44.419 [2024-07-16 00:19:03.224237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:44.419 [2024-07-16 00:19:03.224247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:44.419 passed 00:17:44.419 Test: blockdev nvme admin passthru ...passed 00:17:44.678 Test: blockdev copy ...passed 00:17:44.678 00:17:44.678 Run Summary: Type Total Ran Passed Failed Inactive 00:17:44.678 suites 1 1 n/a 0 0 00:17:44.678 tests 23 23 23 0 0 00:17:44.678 asserts 152 152 152 0 n/a 00:17:44.678 00:17:44.678 Elapsed time = 1.250 seconds 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:44.938 rmmod nvme_tcp 00:17:44.938 rmmod nvme_fabrics 00:17:44.938 rmmod nvme_keyring 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1528171 ']' 00:17:44.938 00:19:03 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1528171 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@942 -- # '[' -z 1528171 ']' 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # kill -0 1528171 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@947 -- # uname 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1528171 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # process_name=reactor_3 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' reactor_3 = sudo ']' 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1528171' 00:17:44.938 killing process with pid 1528171 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@961 -- # kill 1528171 00:17:44.938 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # wait 1528171 00:17:45.197 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:45.197 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:45.197 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:45.197 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:45.197 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:45.197 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.197 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.197 00:19:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.737 00:19:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:47.737 00:17:47.737 real 0m9.393s 00:17:47.737 user 0m12.518s 00:17:47.737 sys 0m4.352s 00:17:47.737 00:19:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1118 -- # xtrace_disable 00:17:47.737 00:19:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:47.737 ************************************ 00:17:47.737 END TEST nvmf_bdevio_no_huge 00:17:47.737 ************************************ 00:17:47.737 00:19:06 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:17:47.737 00:19:06 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:47.737 00:19:06 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:17:47.737 00:19:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:17:47.737 00:19:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:47.737 ************************************ 00:17:47.737 START TEST nvmf_tls 00:17:47.737 ************************************ 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:47.737 * Looking for test storage... 
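The nvmf_bdevio_no_huge run that just ended drove its target configuration through the rpc_cmd calls scattered through the trace above; collected in one place they amount to the sketch below (rpc.py abbreviates the suite's full scripts/rpc.py path, talking to /var/tmp/spdk.sock):

    # Target-side provisioning for the bdevio test, as traced above.
    rpc.py nvmf_create_transport -t tcp -o -u 8192    # -o/-u 8192 copied verbatim from the trace
    rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB ramdisk, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420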
00:17:47.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:47.737 00:19:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:47.738 00:19:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:51.931 
00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:51.931 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:51.932 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:51.932 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:51.932 Found net devices under 0000:86:00.0: cvl_0_0 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:51.932 Found net devices under 0000:86:00.1: cvl_0_1 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.932 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:52.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:52.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:17:52.191 00:17:52.191 --- 10.0.0.2 ping statistics --- 00:17:52.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.191 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:52.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:52.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:17:52.191 00:17:52.191 --- 10.0.0.1 ping statistics --- 00:17:52.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.191 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1531948 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1531948 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1531948 ']' 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.191 00:19:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:52.191 [2024-07-16 00:19:11.026385] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:17:52.191 [2024-07-16 00:19:11.026431] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.450 [2024-07-16 00:19:11.090142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.450 [2024-07-16 00:19:11.186520] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.450 [2024-07-16 00:19:11.186552] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:52.450 [2024-07-16 00:19:11.186559] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.450 [2024-07-16 00:19:11.186566] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.450 [2024-07-16 00:19:11.186571] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.450 [2024-07-16 00:19:11.186591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.019 00:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:53.019 00:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:17:53.019 00:19:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:53.019 00:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:53.019 00:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.278 00:19:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.278 00:19:11 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:53.278 00:19:11 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:53.278 true 00:17:53.278 00:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:53.278 00:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:53.538 00:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:53.538 00:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:53.538 00:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:53.797 00:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:53.797 00:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:53.797 00:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:53.797 00:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:53.797 00:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:54.055 00:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:54.055 00:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:54.314 00:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:54.314 00:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:54.314 00:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:54.314 00:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:54.314 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:54.314 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:54.314 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:54.578 00:19:13 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:54.578 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:54.835 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:54.835 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:54.835 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:54.835 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:54.835 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:55.092 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:55.092 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:55.092 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.O3TMlue9DI 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.0UtCKdHTiv 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.O3TMlue9DI 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.0UtCKdHTiv 00:17:55.093 00:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:17:55.351 00:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:55.609 00:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.O3TMlue9DI 00:17:55.609 00:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.O3TMlue9DI 00:17:55.609 00:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:55.866 [2024-07-16 00:19:14.467131] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:55.866 00:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:55.866 00:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:56.124 [2024-07-16 00:19:14.812022] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:56.124 [2024-07-16 00:19:14.812210] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.124 00:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:56.382 malloc0 00:17:56.382 00:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:56.382 00:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O3TMlue9DI 00:17:56.641 [2024-07-16 00:19:15.329554] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:56.641 00:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.O3TMlue9DI 00:18:06.669 Initializing NVMe Controllers 00:18:06.669 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:06.669 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:06.669 Initialization complete. Launching workers. 
00:18:06.669 ======================================================== 00:18:06.669 Latency(us) 00:18:06.669 Device Information : IOPS MiB/s Average min max 00:18:06.669 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16397.38 64.05 3903.53 854.02 7166.54 00:18:06.669 ======================================================== 00:18:06.669 Total : 16397.38 64.05 3903.53 854.02 7166.54 00:18:06.669 00:18:06.669 00:19:25 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.O3TMlue9DI 00:18:06.669 00:19:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:06.669 00:19:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:06.669 00:19:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:06.669 00:19:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.O3TMlue9DI' 00:18:06.669 00:19:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:06.669 00:19:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1534299 00:18:06.669 00:19:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:06.669 00:19:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:06.669 00:19:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1534299 /var/tmp/bdevperf.sock 00:18:06.669 00:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1534299 ']' 00:18:06.669 00:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:06.669 00:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:06.670 00:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:06.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:06.670 00:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:06.670 00:19:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.670 [2024-07-16 00:19:25.485723] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:18:06.670 [2024-07-16 00:19:25.485773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1534299 ] 00:18:06.927 [2024-07-16 00:19:25.535616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.927 [2024-07-16 00:19:25.614452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.493 00:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:07.493 00:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:07.493 00:19:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O3TMlue9DI 00:18:07.752 [2024-07-16 00:19:26.437182] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:07.752 [2024-07-16 00:19:26.437251] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:07.752 TLSTESTn1 00:18:07.752 00:19:26 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:08.011 Running I/O for 10 seconds... 00:18:17.985 00:18:17.985 Latency(us) 00:18:17.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.985 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:17.985 Verification LBA range: start 0x0 length 0x2000 00:18:17.985 TLSTESTn1 : 10.02 5599.96 21.87 0.00 0.00 22812.26 6126.19 65194.07 00:18:17.985 =================================================================================================================== 00:18:17.985 Total : 5599.96 21.87 0.00 0.00 22812.26 6126.19 65194.07 00:18:17.985 0 00:18:17.985 00:19:36 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:17.985 00:19:36 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1534299 00:18:17.985 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1534299 ']' 00:18:17.985 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1534299 00:18:17.985 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:17.985 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:17.985 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1534299 00:18:17.985 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:18:17.985 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:18:17.985 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1534299' 00:18:17.985 killing process with pid 1534299 00:18:17.985 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1534299 00:18:17.985 Received shutdown signal, test time was about 10.000000 seconds 00:18:17.985 00:18:17.985 Latency(us) 00:18:17.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.985 
=================================================================================================================== 00:18:17.985 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:17.985 [2024-07-16 00:19:36.726899] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:17.985 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1534299 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0UtCKdHTiv 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0UtCKdHTiv 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=run_bdevperf 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # type -t run_bdevperf 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0UtCKdHTiv 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0UtCKdHTiv' 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1536135 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1536135 /var/tmp/bdevperf.sock 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1536135 ']' 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:18.244 00:19:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.244 [2024-07-16 00:19:36.957257] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:18:18.244 [2024-07-16 00:19:36.957305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536135 ] 00:18:18.244 [2024-07-16 00:19:37.007244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.244 [2024-07-16 00:19:37.074538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.181 00:19:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:19.181 00:19:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:19.181 00:19:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0UtCKdHTiv 00:18:19.181 [2024-07-16 00:19:37.908883] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.181 [2024-07-16 00:19:37.908962] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:19.181 [2024-07-16 00:19:37.918948] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:19.181 [2024-07-16 00:19:37.919233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b7570 (107): Transport endpoint is not connected 00:18:19.181 [2024-07-16 00:19:37.920229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b7570 (9): Bad file descriptor 00:18:19.181 [2024-07-16 00:19:37.921228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:19.182 [2024-07-16 00:19:37.921239] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:19.182 [2024-07-16 00:19:37.921249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
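The failure being exercised here is deliberate: the bdevperf attach presents the second key while the target only registered the first, so the controller attach must fail. In sketch form, with both key paths taken from the trace above:

    # Negative case: TLS attach with a PSK the target never registered.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.0UtCKdHTiv    # target holds /tmp/tmp.O3TMlue9DI instead
    # Expected result: JSON-RPC error -5 (Input/output error), dumped next.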
00:18:19.182 request: 00:18:19.182 { 00:18:19.182 "name": "TLSTEST", 00:18:19.182 "trtype": "tcp", 00:18:19.182 "traddr": "10.0.0.2", 00:18:19.182 "adrfam": "ipv4", 00:18:19.182 "trsvcid": "4420", 00:18:19.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:19.182 "prchk_reftag": false, 00:18:19.182 "prchk_guard": false, 00:18:19.182 "hdgst": false, 00:18:19.182 "ddgst": false, 00:18:19.182 "psk": "/tmp/tmp.0UtCKdHTiv", 00:18:19.182 "method": "bdev_nvme_attach_controller", 00:18:19.182 "req_id": 1 00:18:19.182 } 00:18:19.182 Got JSON-RPC error response 00:18:19.182 response: 00:18:19.182 { 00:18:19.182 "code": -5, 00:18:19.182 "message": "Input/output error" 00:18:19.182 } 00:18:19.182 00:19:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1536135 00:18:19.182 00:19:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1536135 ']' 00:18:19.182 00:19:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1536135 00:18:19.182 00:19:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:19.182 00:19:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:19.182 00:19:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1536135 00:18:19.182 00:19:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:18:19.182 00:19:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:18:19.182 00:19:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1536135' 00:18:19.182 killing process with pid 1536135 00:18:19.182 00:19:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1536135 00:18:19.182 Received shutdown signal, test time was about 10.000000 seconds 00:18:19.182 00:18:19.182 Latency(us) 00:18:19.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.182 =================================================================================================================== 00:18:19.182 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:19.182 [2024-07-16 00:19:37.984437] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:19.182 00:19:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1536135 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.O3TMlue9DI 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.O3TMlue9DI 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=run_bdevperf 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@634 -- # type -t run_bdevperf 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.O3TMlue9DI 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.O3TMlue9DI' 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1536370 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1536370 /var/tmp/bdevperf.sock 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1536370 ']' 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:19.441 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.441 [2024-07-16 00:19:38.199907] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:18:19.441 [2024-07-16 00:19:38.199955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536370 ] 00:18:19.441 [2024-07-16 00:19:38.249953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.700 [2024-07-16 00:19:38.318215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.700 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:19.700 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:19.700 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.O3TMlue9DI 00:18:19.700 [2024-07-16 00:19:38.550000] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.700 [2024-07-16 00:19:38.550074] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:19.959 [2024-07-16 00:19:38.560499] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:19.959 [2024-07-16 00:19:38.560522] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:19.959 [2024-07-16 00:19:38.560544] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:19.960 [2024-07-16 00:19:38.561361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc40570 (107): Transport endpoint is not connected 00:18:19.960 [2024-07-16 00:19:38.562356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc40570 (9): Bad file descriptor 00:18:19.960 [2024-07-16 00:19:38.563360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:19.960 [2024-07-16 00:19:38.563370] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:19.960 [2024-07-16 00:19:38.563379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
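The lookup key on the target side is the TLS PSK identity string, visible in the two ERROR lines above: NVMe0R01 followed by the host NQN and the subsystem NQN. nqn.2016-06.io.spdk:host2 was never registered on cnode1, so posix_sock_psk_find_session_server_cb finds no key and the connection is torn down before the NVMe layer ever starts. A hypothetical fix, shown only to make the failure mode concrete (this negative test deliberately omits it):

  # Registering host2 with its key would make the identity above resolvable:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.O3TMlue9DI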
00:18:19.960 request: 00:18:19.960 { 00:18:19.960 "name": "TLSTEST", 00:18:19.960 "trtype": "tcp", 00:18:19.960 "traddr": "10.0.0.2", 00:18:19.960 "adrfam": "ipv4", 00:18:19.960 "trsvcid": "4420", 00:18:19.960 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.960 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:19.960 "prchk_reftag": false, 00:18:19.960 "prchk_guard": false, 00:18:19.960 "hdgst": false, 00:18:19.960 "ddgst": false, 00:18:19.960 "psk": "/tmp/tmp.O3TMlue9DI", 00:18:19.960 "method": "bdev_nvme_attach_controller", 00:18:19.960 "req_id": 1 00:18:19.960 } 00:18:19.960 Got JSON-RPC error response 00:18:19.960 response: 00:18:19.960 { 00:18:19.960 "code": -5, 00:18:19.960 "message": "Input/output error" 00:18:19.960 } 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1536370 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1536370 ']' 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1536370 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1536370 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1536370' 00:18:19.960 killing process with pid 1536370 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1536370 00:18:19.960 Received shutdown signal, test time was about 10.000000 seconds 00:18:19.960 00:18:19.960 Latency(us) 00:18:19.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.960 =================================================================================================================== 00:18:19.960 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:19.960 [2024-07-16 00:19:38.629694] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1536370 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.O3TMlue9DI 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.O3TMlue9DI 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=run_bdevperf 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@634 -- # type -t run_bdevperf 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.O3TMlue9DI 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.O3TMlue9DI' 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1536493 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1536493 /var/tmp/bdevperf.sock 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1536493 ']' 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:19.960 00:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.219 [2024-07-16 00:19:38.851047] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:18:20.219 [2024-07-16 00:19:38.851096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536493 ] 00:18:20.219 [2024-07-16 00:19:38.901702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.219 [2024-07-16 00:19:38.978138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.156 00:19:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:21.156 00:19:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:21.156 00:19:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O3TMlue9DI 00:18:21.156 [2024-07-16 00:19:39.792751] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:21.156 [2024-07-16 00:19:39.792820] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:21.156 [2024-07-16 00:19:39.803661] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:21.156 [2024-07-16 00:19:39.803682] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:21.156 [2024-07-16 00:19:39.803704] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:21.156 [2024-07-16 00:19:39.804094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b3570 (107): Transport endpoint is not connected 00:18:21.156 [2024-07-16 00:19:39.805088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b3570 (9): Bad file descriptor 00:18:21.156 [2024-07-16 00:19:39.806089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:21.156 [2024-07-16 00:19:39.806099] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:21.156 [2024-07-16 00:19:39.806108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
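This third permutation points the initiator at nqn.2016-06.io.spdk:cnode2, a subsystem the PSK was never associated with, and fails the same way as the host mismatch above. Each of these cases is driven through the NOT wrapper, which inverts the exit status so the test passes only when the attach fails. A rough paraphrase of its shape (a sketch, not the verbatim autotest_common.sh source, which also validates the argument via valid_exec_arg first):

  NOT() {
      local es=0
      "$@" || es=$?
      # Succeed only if the wrapped command failed, as these negative tests require.
      (( es != 0 ))
  }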
00:18:21.156 request: 00:18:21.156 { 00:18:21.156 "name": "TLSTEST", 00:18:21.156 "trtype": "tcp", 00:18:21.156 "traddr": "10.0.0.2", 00:18:21.156 "adrfam": "ipv4", 00:18:21.156 "trsvcid": "4420", 00:18:21.156 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:21.156 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.156 "prchk_reftag": false, 00:18:21.156 "prchk_guard": false, 00:18:21.156 "hdgst": false, 00:18:21.156 "ddgst": false, 00:18:21.156 "psk": "/tmp/tmp.O3TMlue9DI", 00:18:21.156 "method": "bdev_nvme_attach_controller", 00:18:21.156 "req_id": 1 00:18:21.156 } 00:18:21.156 Got JSON-RPC error response 00:18:21.156 response: 00:18:21.156 { 00:18:21.156 "code": -5, 00:18:21.156 "message": "Input/output error" 00:18:21.156 } 00:18:21.156 00:19:39 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1536493 00:18:21.156 00:19:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1536493 ']' 00:18:21.156 00:19:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1536493 00:18:21.156 00:19:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:21.156 00:19:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:21.156 00:19:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1536493 00:18:21.157 00:19:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:18:21.157 00:19:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:18:21.157 00:19:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1536493' 00:18:21.157 killing process with pid 1536493 00:18:21.157 00:19:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1536493 00:18:21.157 Received shutdown signal, test time was about 10.000000 seconds 00:18:21.157 00:18:21.157 Latency(us) 00:18:21.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.157 =================================================================================================================== 00:18:21.157 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:21.157 [2024-07-16 00:19:39.871454] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:21.157 00:19:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1536493 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=run_bdevperf 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # type 
-t run_bdevperf 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1536641 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1536641 /var/tmp/bdevperf.sock 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1536641 ']' 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:21.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:21.416 00:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.416 [2024-07-16 00:19:40.094856] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:18:21.416 [2024-07-16 00:19:40.094906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536641 ] 00:18:21.416 [2024-07-16 00:19:40.145429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.416 [2024-07-16 00:19:40.218051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:22.350 00:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:22.350 00:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:22.350 00:19:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:22.350 [2024-07-16 00:19:41.056917] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:22.350 [2024-07-16 00:19:41.058745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c7af0 (9): Bad file descriptor 00:18:22.350 [2024-07-16 00:19:41.059745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:22.350 [2024-07-16 00:19:41.059755] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:22.350 [2024-07-16 00:19:41.059763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:22.350 request: 00:18:22.350 { 00:18:22.350 "name": "TLSTEST", 00:18:22.350 "trtype": "tcp", 00:18:22.350 "traddr": "10.0.0.2", 00:18:22.350 "adrfam": "ipv4", 00:18:22.350 "trsvcid": "4420", 00:18:22.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.350 "prchk_reftag": false, 00:18:22.350 "prchk_guard": false, 00:18:22.350 "hdgst": false, 00:18:22.350 "ddgst": false, 00:18:22.350 "method": "bdev_nvme_attach_controller", 00:18:22.350 "req_id": 1 00:18:22.350 } 00:18:22.350 Got JSON-RPC error response 00:18:22.350 response: 00:18:22.350 { 00:18:22.350 "code": -5, 00:18:22.350 "message": "Input/output error" 00:18:22.350 } 00:18:22.350 00:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1536641 00:18:22.350 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1536641 ']' 00:18:22.350 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1536641 00:18:22.350 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:22.350 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:22.350 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1536641 00:18:22.350 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:18:22.350 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:18:22.350 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1536641' 00:18:22.350 killing process with pid 1536641 00:18:22.350 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1536641 00:18:22.350 Received shutdown signal, test time was about 
10.000000 seconds 00:18:22.350 00:18:22.350 Latency(us) 00:18:22.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.350 =================================================================================================================== 00:18:22.350 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:22.350 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1536641 00:18:22.608 00:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:22.608 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:18:22.608 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:18:22.608 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:18:22.608 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:18:22.608 00:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1531948 00:18:22.608 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1531948 ']' 00:18:22.608 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1531948 00:18:22.608 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:22.608 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:22.608 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1531948 00:18:22.608 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:18:22.608 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:18:22.608 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1531948' 00:18:22.608 killing process with pid 1531948 00:18:22.608 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1531948 00:18:22.608 [2024-07-16 00:19:41.339464] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:22.608 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1531948 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.7kFpAoUVEj 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.7kFpAoUVEj 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@163 
-- # nvmfappstart -m 0x2 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1536993 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1536993 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1536993 ']' 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:22.902 00:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.902 [2024-07-16 00:19:41.635860] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:18:22.902 [2024-07-16 00:19:41.635914] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.902 [2024-07-16 00:19:41.694696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.159 [2024-07-16 00:19:41.773944] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.159 [2024-07-16 00:19:41.773979] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.159 [2024-07-16 00:19:41.773986] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.159 [2024-07-16 00:19:41.773992] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.160 [2024-07-16 00:19:41.773997] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
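The interchange key minted at target/tls.sh@159 above is the raw 48-byte key followed by its CRC32, base64-encoded and wrapped in an NVMeTLSkey-1:<hash>:...: envelope; the 02 hash field corresponds to digest argument 2 and, by the usual NVMe-oF TLS PSK interchange convention, is assumed here to denote the SHA-384 variant. A sketch reconstructing the same string from the same input (inferred from the nvmf/common.sh calls above, not copied from their source):

  key=00112233445566778899aabbccddeeff0011223344556677
  # base64(key || crc32(key) little-endian), wrapped as NVMeTLSkey-1:02:...:
  python3 -c 'import base64, zlib, sys; k = sys.argv[1].encode(); print("NVMeTLSkey-1:02:" + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")' "$key"
  # -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: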
00:18:23.160 [2024-07-16 00:19:41.774015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.725 00:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:23.725 00:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:23.725 00:19:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:23.725 00:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:23.725 00:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.725 00:19:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.725 00:19:42 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.7kFpAoUVEj 00:18:23.725 00:19:42 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.7kFpAoUVEj 00:18:23.725 00:19:42 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:23.983 [2024-07-16 00:19:42.611911] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.983 00:19:42 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:23.983 00:19:42 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:24.241 [2024-07-16 00:19:42.964814] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:24.241 [2024-07-16 00:19:42.964988] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.241 00:19:42 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:24.499 malloc0 00:18:24.499 00:19:43 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:24.499 00:19:43 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7kFpAoUVEj 00:18:24.758 [2024-07-16 00:19:43.466370] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:24.758 00:19:43 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7kFpAoUVEj 00:18:24.758 00:19:43 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:24.758 00:19:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:24.758 00:19:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:24.758 00:19:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7kFpAoUVEj' 00:18:24.758 00:19:43 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:24.758 00:19:43 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:24.758 00:19:43 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # 
bdevperf_pid=1537346 00:18:24.758 00:19:43 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:24.758 00:19:43 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1537346 /var/tmp/bdevperf.sock 00:18:24.758 00:19:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1537346 ']' 00:18:24.758 00:19:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.758 00:19:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:24.758 00:19:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.758 00:19:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:24.758 00:19:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.758 [2024-07-16 00:19:43.514393] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:18:24.758 [2024-07-16 00:19:43.514440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537346 ] 00:18:24.758 [2024-07-16 00:19:43.564866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.016 [2024-07-16 00:19:43.638927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.017 00:19:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:25.017 00:19:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:25.017 00:19:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7kFpAoUVEj 00:18:25.275 [2024-07-16 00:19:43.870732] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:25.275 [2024-07-16 00:19:43.870805] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:25.275 TLSTESTn1 00:18:25.275 00:19:43 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:25.275 Running I/O for 10 seconds... 
00:18:35.251 00:18:35.251 Latency(us) 00:18:35.251 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.251 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:35.251 Verification LBA range: start 0x0 length 0x2000 00:18:35.251 TLSTESTn1 : 10.02 5522.31 21.57 0.00 0.00 23140.35 6781.55 46502.07 00:18:35.251 =================================================================================================================== 00:18:35.251 Total : 5522.31 21.57 0.00 0.00 23140.35 6781.55 46502.07 00:18:35.251 0 00:18:35.251 00:19:54 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:35.251 00:19:54 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1537346 00:18:35.251 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1537346 ']' 00:18:35.251 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1537346 00:18:35.251 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1537346 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1537346' 00:18:35.511 killing process with pid 1537346 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1537346 00:18:35.511 Received shutdown signal, test time was about 10.000000 seconds 00:18:35.511 00:18:35.511 Latency(us) 00:18:35.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.511 =================================================================================================================== 00:18:35.511 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:35.511 [2024-07-16 00:19:54.146697] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1537346 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.7kFpAoUVEj 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7kFpAoUVEj 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7kFpAoUVEj 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=run_bdevperf 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # type -t run_bdevperf 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7kFpAoUVEj 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn 
psk 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7kFpAoUVEj' 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1539164 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1539164 /var/tmp/bdevperf.sock 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1539164 ']' 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:35.511 00:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.771 [2024-07-16 00:19:54.379402] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:18:35.771 [2024-07-16 00:19:54.379455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1539164 ] 00:18:35.771 [2024-07-16 00:19:54.429118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.771 [2024-07-16 00:19:54.498405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.339 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:36.339 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:36.339 00:19:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7kFpAoUVEj 00:18:36.632 [2024-07-16 00:19:55.331807] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.632 [2024-07-16 00:19:55.331856] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:36.632 [2024-07-16 00:19:55.331863] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.7kFpAoUVEj 00:18:36.632 request: 00:18:36.632 { 00:18:36.632 "name": "TLSTEST", 00:18:36.632 "trtype": "tcp", 00:18:36.632 "traddr": "10.0.0.2", 00:18:36.632 "adrfam": "ipv4", 00:18:36.632 "trsvcid": "4420", 00:18:36.632 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.632 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:36.632 "prchk_reftag": false, 00:18:36.632 "prchk_guard": false, 00:18:36.632 "hdgst": false, 00:18:36.632 "ddgst": false, 00:18:36.632 "psk": "/tmp/tmp.7kFpAoUVEj", 00:18:36.632 "method": "bdev_nvme_attach_controller", 00:18:36.632 "req_id": 1 00:18:36.632 } 00:18:36.632 Got JSON-RPC error response 00:18:36.632 response: 00:18:36.632 { 00:18:36.632 "code": -1, 00:18:36.632 "message": "Operation not permitted" 00:18:36.632 } 00:18:36.632 00:19:55 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1539164 00:18:36.632 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1539164 ']' 00:18:36.632 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1539164 00:18:36.632 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:36.632 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:36.632 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1539164 00:18:36.632 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:18:36.632 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:18:36.632 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1539164' 00:18:36.632 killing process with pid 1539164 00:18:36.632 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1539164 00:18:36.632 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.632 00:18:36.632 Latency(us) 00:18:36.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.632 =================================================================================================================== 00:18:36.632 Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:18:36.632 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1539164 00:18:36.891 00:19:55 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:36.891 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:18:36.891 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:18:36.891 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:18:36.891 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:18:36.891 00:19:55 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1536993 00:18:36.891 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1536993 ']' 00:18:36.891 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1536993 00:18:36.891 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:36.891 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:36.891 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1536993 00:18:36.891 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:18:36.891 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:18:36.891 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1536993' 00:18:36.891 killing process with pid 1536993 00:18:36.891 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1536993 00:18:36.891 [2024-07-16 00:19:55.620427] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:36.891 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1536993 00:18:37.151 00:19:55 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:37.151 00:19:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:37.151 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:37.151 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.151 00:19:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1539427 00:18:37.151 00:19:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1539427 00:18:37.151 00:19:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:37.151 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1539427 ']' 00:18:37.151 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.151 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:37.151 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.151 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:37.151 00:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.151 [2024-07-16 00:19:55.870389] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:18:37.151 [2024-07-16 00:19:55.870435] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.151 [2024-07-16 00:19:55.925782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.410 [2024-07-16 00:19:56.004629] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.411 [2024-07-16 00:19:56.004661] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.411 [2024-07-16 00:19:56.004668] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.411 [2024-07-16 00:19:56.004674] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.411 [2024-07-16 00:19:56.004679] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:37.411 [2024-07-16 00:19:56.004696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.979 00:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:37.979 00:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:37.979 00:19:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:37.979 00:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:37.979 00:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.979 00:19:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.979 00:19:56 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.7kFpAoUVEj 00:18:37.979 00:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:18:37.979 00:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.7kFpAoUVEj 00:18:37.979 00:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=setup_nvmf_tgt 00:18:37.979 00:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:37.979 00:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # type -t setup_nvmf_tgt 00:18:37.979 00:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:37.979 00:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # setup_nvmf_tgt /tmp/tmp.7kFpAoUVEj 00:18:37.979 00:19:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.7kFpAoUVEj 00:18:37.979 00:19:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:38.237 [2024-07-16 00:19:56.863468] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.237 00:19:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:38.237 00:19:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:38.495 [2024-07-16 00:19:57.196308] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:38.495 [2024-07-16 00:19:57.196504] 
tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.495 00:19:57 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:38.754 malloc0 00:18:38.754 00:19:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:38.754 00:19:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7kFpAoUVEj 00:18:39.013 [2024-07-16 00:19:57.709920] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:39.013 [2024-07-16 00:19:57.709947] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:39.013 [2024-07-16 00:19:57.709970] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:39.013 request: 00:18:39.013 { 00:18:39.013 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.013 "host": "nqn.2016-06.io.spdk:host1", 00:18:39.013 "psk": "/tmp/tmp.7kFpAoUVEj", 00:18:39.013 "method": "nvmf_subsystem_add_host", 00:18:39.013 "req_id": 1 00:18:39.013 } 00:18:39.013 Got JSON-RPC error response 00:18:39.013 response: 00:18:39.013 { 00:18:39.013 "code": -32603, 00:18:39.013 "message": "Internal error" 00:18:39.013 } 00:18:39.013 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:18:39.013 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:18:39.013 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:18:39.013 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:18:39.013 00:19:57 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1539427 00:18:39.013 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1539427 ']' 00:18:39.013 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1539427 00:18:39.013 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:39.013 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:39.013 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1539427 00:18:39.013 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:18:39.013 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:18:39.013 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1539427' 00:18:39.013 killing process with pid 1539427 00:18:39.013 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1539427 00:18:39.013 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1539427 00:18:39.272 00:19:57 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.7kFpAoUVEj 00:18:39.272 00:19:57 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:39.272 00:19:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:39.272 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:39.272 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.272 00:19:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1539703 00:18:39.272 
00:19:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1539703 00:18:39.272 00:19:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:39.272 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1539703 ']' 00:18:39.272 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.272 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:39.272 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.272 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:39.272 00:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.272 [2024-07-16 00:19:58.018836] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:18:39.272 [2024-07-16 00:19:58.018880] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.272 [2024-07-16 00:19:58.075371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.529 [2024-07-16 00:19:58.154053] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.529 [2024-07-16 00:19:58.154085] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.529 [2024-07-16 00:19:58.154093] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.529 [2024-07-16 00:19:58.154099] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.529 [2024-07-16 00:19:58.154104] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
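Both failures above trace back to one check: after target/tls.sh@170 loosened the key file to 0666, bdev_nvme_load_psk rejected it on the initiator side (RPC error -1, Operation not permitted), and tcp_load_psk rejected the same file on the target side when nvmf_subsystem_add_host tried to read it (RPC error -32603, Internal error); target/tls.sh@181 then restores 0600 before the suite continues. The precondition as a shell sketch (an illustration of the required mode, not SPDK source):

  key=/tmp/tmp.7kFpAoUVEj
  # The PSK file must not be group- or world-accessible; 0600 is what the suite uses.
  [[ $(stat -c '%a' "$key") == 600 ]] || chmod 0600 "$key"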
00:18:39.529 [2024-07-16 00:19:58.154121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.137 00:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:40.137 00:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:40.137 00:19:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:40.137 00:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:40.137 00:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.137 00:19:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.137 00:19:58 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.7kFpAoUVEj 00:18:40.137 00:19:58 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.7kFpAoUVEj 00:18:40.137 00:19:58 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:40.395 [2024-07-16 00:19:59.008472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.395 00:19:59 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:40.395 00:19:59 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:40.653 [2024-07-16 00:19:59.345335] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:40.653 [2024-07-16 00:19:59.345522] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.653 00:19:59 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:40.911 malloc0 00:18:40.911 00:19:59 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:40.911 00:19:59 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7kFpAoUVEj 00:18:41.168 [2024-07-16 00:19:59.874910] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:41.168 00:19:59 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1540138 00:18:41.168 00:19:59 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:41.168 00:19:59 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:41.168 00:19:59 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1540138 /var/tmp/bdevperf.sock 00:18:41.168 00:19:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1540138 ']' 00:18:41.168 00:19:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:41.168 00:19:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:41.168 00:19:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:41.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:41.168 00:19:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:41.168 00:19:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.168 [2024-07-16 00:19:59.936871] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:18:41.168 [2024-07-16 00:19:59.936922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540138 ] 00:18:41.169 [2024-07-16 00:19:59.987893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.426 [2024-07-16 00:20:00.070270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.992 00:20:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:41.992 00:20:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:41.992 00:20:00 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7kFpAoUVEj 00:18:42.250 [2024-07-16 00:20:00.888384] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:42.250 [2024-07-16 00:20:00.888458] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:42.250 TLSTESTn1 00:18:42.250 00:20:00 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:42.509 00:20:01 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:42.509 "subsystems": [ 00:18:42.509 { 00:18:42.509 "subsystem": "keyring", 00:18:42.509 "config": [] 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "subsystem": "iobuf", 00:18:42.509 "config": [ 00:18:42.509 { 00:18:42.509 "method": "iobuf_set_options", 00:18:42.509 "params": { 00:18:42.509 "small_pool_count": 8192, 00:18:42.509 "large_pool_count": 1024, 00:18:42.509 "small_bufsize": 8192, 00:18:42.509 "large_bufsize": 135168 00:18:42.509 } 00:18:42.509 } 00:18:42.509 ] 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "subsystem": "sock", 00:18:42.509 "config": [ 00:18:42.509 { 00:18:42.509 "method": "sock_set_default_impl", 00:18:42.509 "params": { 00:18:42.509 "impl_name": "posix" 00:18:42.509 } 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "method": "sock_impl_set_options", 00:18:42.509 "params": { 00:18:42.509 "impl_name": "ssl", 00:18:42.509 "recv_buf_size": 4096, 00:18:42.509 "send_buf_size": 4096, 00:18:42.509 "enable_recv_pipe": true, 00:18:42.509 "enable_quickack": false, 00:18:42.509 "enable_placement_id": 0, 00:18:42.509 "enable_zerocopy_send_server": true, 00:18:42.509 "enable_zerocopy_send_client": false, 00:18:42.509 "zerocopy_threshold": 0, 00:18:42.509 "tls_version": 0, 00:18:42.509 "enable_ktls": false 00:18:42.509 } 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "method": "sock_impl_set_options", 00:18:42.509 "params": { 00:18:42.509 "impl_name": "posix", 00:18:42.509 "recv_buf_size": 2097152, 00:18:42.509 "send_buf_size": 2097152, 00:18:42.509 "enable_recv_pipe": 
true, 00:18:42.509 "enable_quickack": false, 00:18:42.509 "enable_placement_id": 0, 00:18:42.509 "enable_zerocopy_send_server": true, 00:18:42.509 "enable_zerocopy_send_client": false, 00:18:42.509 "zerocopy_threshold": 0, 00:18:42.509 "tls_version": 0, 00:18:42.509 "enable_ktls": false 00:18:42.509 } 00:18:42.509 } 00:18:42.509 ] 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "subsystem": "vmd", 00:18:42.509 "config": [] 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "subsystem": "accel", 00:18:42.509 "config": [ 00:18:42.509 { 00:18:42.509 "method": "accel_set_options", 00:18:42.509 "params": { 00:18:42.509 "small_cache_size": 128, 00:18:42.509 "large_cache_size": 16, 00:18:42.509 "task_count": 2048, 00:18:42.509 "sequence_count": 2048, 00:18:42.509 "buf_count": 2048 00:18:42.509 } 00:18:42.509 } 00:18:42.509 ] 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "subsystem": "bdev", 00:18:42.509 "config": [ 00:18:42.509 { 00:18:42.509 "method": "bdev_set_options", 00:18:42.509 "params": { 00:18:42.509 "bdev_io_pool_size": 65535, 00:18:42.509 "bdev_io_cache_size": 256, 00:18:42.509 "bdev_auto_examine": true, 00:18:42.509 "iobuf_small_cache_size": 128, 00:18:42.509 "iobuf_large_cache_size": 16 00:18:42.509 } 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "method": "bdev_raid_set_options", 00:18:42.509 "params": { 00:18:42.509 "process_window_size_kb": 1024 00:18:42.509 } 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "method": "bdev_iscsi_set_options", 00:18:42.509 "params": { 00:18:42.509 "timeout_sec": 30 00:18:42.509 } 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "method": "bdev_nvme_set_options", 00:18:42.509 "params": { 00:18:42.509 "action_on_timeout": "none", 00:18:42.509 "timeout_us": 0, 00:18:42.509 "timeout_admin_us": 0, 00:18:42.509 "keep_alive_timeout_ms": 10000, 00:18:42.509 "arbitration_burst": 0, 00:18:42.509 "low_priority_weight": 0, 00:18:42.509 "medium_priority_weight": 0, 00:18:42.509 "high_priority_weight": 0, 00:18:42.509 "nvme_adminq_poll_period_us": 10000, 00:18:42.509 "nvme_ioq_poll_period_us": 0, 00:18:42.509 "io_queue_requests": 0, 00:18:42.509 "delay_cmd_submit": true, 00:18:42.509 "transport_retry_count": 4, 00:18:42.509 "bdev_retry_count": 3, 00:18:42.509 "transport_ack_timeout": 0, 00:18:42.509 "ctrlr_loss_timeout_sec": 0, 00:18:42.509 "reconnect_delay_sec": 0, 00:18:42.509 "fast_io_fail_timeout_sec": 0, 00:18:42.509 "disable_auto_failback": false, 00:18:42.509 "generate_uuids": false, 00:18:42.509 "transport_tos": 0, 00:18:42.509 "nvme_error_stat": false, 00:18:42.509 "rdma_srq_size": 0, 00:18:42.509 "io_path_stat": false, 00:18:42.509 "allow_accel_sequence": false, 00:18:42.509 "rdma_max_cq_size": 0, 00:18:42.509 "rdma_cm_event_timeout_ms": 0, 00:18:42.509 "dhchap_digests": [ 00:18:42.509 "sha256", 00:18:42.509 "sha384", 00:18:42.509 "sha512" 00:18:42.509 ], 00:18:42.509 "dhchap_dhgroups": [ 00:18:42.509 "null", 00:18:42.509 "ffdhe2048", 00:18:42.509 "ffdhe3072", 00:18:42.509 "ffdhe4096", 00:18:42.509 "ffdhe6144", 00:18:42.509 "ffdhe8192" 00:18:42.509 ] 00:18:42.509 } 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "method": "bdev_nvme_set_hotplug", 00:18:42.509 "params": { 00:18:42.509 "period_us": 100000, 00:18:42.509 "enable": false 00:18:42.509 } 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "method": "bdev_malloc_create", 00:18:42.509 "params": { 00:18:42.509 "name": "malloc0", 00:18:42.509 "num_blocks": 8192, 00:18:42.509 "block_size": 4096, 00:18:42.509 "physical_block_size": 4096, 00:18:42.509 "uuid": "22efd8d5-7322-4a9a-bcbb-3b3af07e7850", 00:18:42.509 "optimal_io_boundary": 
0 00:18:42.509 } 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "method": "bdev_wait_for_examine" 00:18:42.509 } 00:18:42.509 ] 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "subsystem": "nbd", 00:18:42.509 "config": [] 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "subsystem": "scheduler", 00:18:42.509 "config": [ 00:18:42.509 { 00:18:42.509 "method": "framework_set_scheduler", 00:18:42.509 "params": { 00:18:42.509 "name": "static" 00:18:42.509 } 00:18:42.509 } 00:18:42.509 ] 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "subsystem": "nvmf", 00:18:42.509 "config": [ 00:18:42.509 { 00:18:42.509 "method": "nvmf_set_config", 00:18:42.509 "params": { 00:18:42.509 "discovery_filter": "match_any", 00:18:42.509 "admin_cmd_passthru": { 00:18:42.509 "identify_ctrlr": false 00:18:42.509 } 00:18:42.509 } 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "method": "nvmf_set_max_subsystems", 00:18:42.509 "params": { 00:18:42.509 "max_subsystems": 1024 00:18:42.509 } 00:18:42.509 }, 00:18:42.509 { 00:18:42.509 "method": "nvmf_set_crdt", 00:18:42.509 "params": { 00:18:42.509 "crdt1": 0, 00:18:42.509 "crdt2": 0, 00:18:42.509 "crdt3": 0 00:18:42.509 } 00:18:42.509 }, 00:18:42.509 { 00:18:42.510 "method": "nvmf_create_transport", 00:18:42.510 "params": { 00:18:42.510 "trtype": "TCP", 00:18:42.510 "max_queue_depth": 128, 00:18:42.510 "max_io_qpairs_per_ctrlr": 127, 00:18:42.510 "in_capsule_data_size": 4096, 00:18:42.510 "max_io_size": 131072, 00:18:42.510 "io_unit_size": 131072, 00:18:42.510 "max_aq_depth": 128, 00:18:42.510 "num_shared_buffers": 511, 00:18:42.510 "buf_cache_size": 4294967295, 00:18:42.510 "dif_insert_or_strip": false, 00:18:42.510 "zcopy": false, 00:18:42.510 "c2h_success": false, 00:18:42.510 "sock_priority": 0, 00:18:42.510 "abort_timeout_sec": 1, 00:18:42.510 "ack_timeout": 0, 00:18:42.510 "data_wr_pool_size": 0 00:18:42.510 } 00:18:42.510 }, 00:18:42.510 { 00:18:42.510 "method": "nvmf_create_subsystem", 00:18:42.510 "params": { 00:18:42.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.510 "allow_any_host": false, 00:18:42.510 "serial_number": "SPDK00000000000001", 00:18:42.510 "model_number": "SPDK bdev Controller", 00:18:42.510 "max_namespaces": 10, 00:18:42.510 "min_cntlid": 1, 00:18:42.510 "max_cntlid": 65519, 00:18:42.510 "ana_reporting": false 00:18:42.510 } 00:18:42.510 }, 00:18:42.510 { 00:18:42.510 "method": "nvmf_subsystem_add_host", 00:18:42.510 "params": { 00:18:42.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.510 "host": "nqn.2016-06.io.spdk:host1", 00:18:42.510 "psk": "/tmp/tmp.7kFpAoUVEj" 00:18:42.510 } 00:18:42.510 }, 00:18:42.510 { 00:18:42.510 "method": "nvmf_subsystem_add_ns", 00:18:42.510 "params": { 00:18:42.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.510 "namespace": { 00:18:42.510 "nsid": 1, 00:18:42.510 "bdev_name": "malloc0", 00:18:42.510 "nguid": "22EFD8D573224A9ABCBB3B3AF07E7850", 00:18:42.510 "uuid": "22efd8d5-7322-4a9a-bcbb-3b3af07e7850", 00:18:42.510 "no_auto_visible": false 00:18:42.510 } 00:18:42.510 } 00:18:42.510 }, 00:18:42.510 { 00:18:42.510 "method": "nvmf_subsystem_add_listener", 00:18:42.510 "params": { 00:18:42.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.510 "listen_address": { 00:18:42.510 "trtype": "TCP", 00:18:42.510 "adrfam": "IPv4", 00:18:42.510 "traddr": "10.0.0.2", 00:18:42.510 "trsvcid": "4420" 00:18:42.510 }, 00:18:42.510 "secure_channel": true 00:18:42.510 } 00:18:42.510 } 00:18:42.510 ] 00:18:42.510 } 00:18:42.510 ] 00:18:42.510 }' 00:18:42.510 00:20:01 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:42.768 00:20:01 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:42.768 "subsystems": [ 00:18:42.768 { 00:18:42.768 "subsystem": "keyring", 00:18:42.768 "config": [] 00:18:42.768 }, 00:18:42.768 { 00:18:42.768 "subsystem": "iobuf", 00:18:42.768 "config": [ 00:18:42.768 { 00:18:42.768 "method": "iobuf_set_options", 00:18:42.768 "params": { 00:18:42.768 "small_pool_count": 8192, 00:18:42.768 "large_pool_count": 1024, 00:18:42.768 "small_bufsize": 8192, 00:18:42.768 "large_bufsize": 135168 00:18:42.768 } 00:18:42.768 } 00:18:42.768 ] 00:18:42.768 }, 00:18:42.768 { 00:18:42.768 "subsystem": "sock", 00:18:42.768 "config": [ 00:18:42.768 { 00:18:42.768 "method": "sock_set_default_impl", 00:18:42.768 "params": { 00:18:42.768 "impl_name": "posix" 00:18:42.768 } 00:18:42.768 }, 00:18:42.768 { 00:18:42.768 "method": "sock_impl_set_options", 00:18:42.768 "params": { 00:18:42.768 "impl_name": "ssl", 00:18:42.768 "recv_buf_size": 4096, 00:18:42.768 "send_buf_size": 4096, 00:18:42.768 "enable_recv_pipe": true, 00:18:42.768 "enable_quickack": false, 00:18:42.768 "enable_placement_id": 0, 00:18:42.768 "enable_zerocopy_send_server": true, 00:18:42.768 "enable_zerocopy_send_client": false, 00:18:42.768 "zerocopy_threshold": 0, 00:18:42.768 "tls_version": 0, 00:18:42.768 "enable_ktls": false 00:18:42.768 } 00:18:42.768 }, 00:18:42.768 { 00:18:42.768 "method": "sock_impl_set_options", 00:18:42.768 "params": { 00:18:42.768 "impl_name": "posix", 00:18:42.768 "recv_buf_size": 2097152, 00:18:42.768 "send_buf_size": 2097152, 00:18:42.768 "enable_recv_pipe": true, 00:18:42.768 "enable_quickack": false, 00:18:42.768 "enable_placement_id": 0, 00:18:42.768 "enable_zerocopy_send_server": true, 00:18:42.768 "enable_zerocopy_send_client": false, 00:18:42.768 "zerocopy_threshold": 0, 00:18:42.768 "tls_version": 0, 00:18:42.768 "enable_ktls": false 00:18:42.768 } 00:18:42.768 } 00:18:42.768 ] 00:18:42.768 }, 00:18:42.768 { 00:18:42.768 "subsystem": "vmd", 00:18:42.768 "config": [] 00:18:42.768 }, 00:18:42.768 { 00:18:42.768 "subsystem": "accel", 00:18:42.768 "config": [ 00:18:42.768 { 00:18:42.768 "method": "accel_set_options", 00:18:42.768 "params": { 00:18:42.768 "small_cache_size": 128, 00:18:42.768 "large_cache_size": 16, 00:18:42.768 "task_count": 2048, 00:18:42.768 "sequence_count": 2048, 00:18:42.768 "buf_count": 2048 00:18:42.768 } 00:18:42.768 } 00:18:42.768 ] 00:18:42.768 }, 00:18:42.768 { 00:18:42.768 "subsystem": "bdev", 00:18:42.768 "config": [ 00:18:42.768 { 00:18:42.768 "method": "bdev_set_options", 00:18:42.768 "params": { 00:18:42.768 "bdev_io_pool_size": 65535, 00:18:42.768 "bdev_io_cache_size": 256, 00:18:42.768 "bdev_auto_examine": true, 00:18:42.768 "iobuf_small_cache_size": 128, 00:18:42.768 "iobuf_large_cache_size": 16 00:18:42.768 } 00:18:42.768 }, 00:18:42.768 { 00:18:42.768 "method": "bdev_raid_set_options", 00:18:42.768 "params": { 00:18:42.768 "process_window_size_kb": 1024 00:18:42.768 } 00:18:42.768 }, 00:18:42.768 { 00:18:42.768 "method": "bdev_iscsi_set_options", 00:18:42.768 "params": { 00:18:42.768 "timeout_sec": 30 00:18:42.768 } 00:18:42.768 }, 00:18:42.768 { 00:18:42.768 "method": "bdev_nvme_set_options", 00:18:42.768 "params": { 00:18:42.768 "action_on_timeout": "none", 00:18:42.768 "timeout_us": 0, 00:18:42.768 "timeout_admin_us": 0, 00:18:42.768 "keep_alive_timeout_ms": 10000, 00:18:42.768 "arbitration_burst": 0, 00:18:42.768 "low_priority_weight": 0, 
00:18:42.768 "medium_priority_weight": 0, 00:18:42.768 "high_priority_weight": 0, 00:18:42.768 "nvme_adminq_poll_period_us": 10000, 00:18:42.768 "nvme_ioq_poll_period_us": 0, 00:18:42.768 "io_queue_requests": 512, 00:18:42.768 "delay_cmd_submit": true, 00:18:42.768 "transport_retry_count": 4, 00:18:42.768 "bdev_retry_count": 3, 00:18:42.768 "transport_ack_timeout": 0, 00:18:42.768 "ctrlr_loss_timeout_sec": 0, 00:18:42.768 "reconnect_delay_sec": 0, 00:18:42.768 "fast_io_fail_timeout_sec": 0, 00:18:42.768 "disable_auto_failback": false, 00:18:42.769 "generate_uuids": false, 00:18:42.769 "transport_tos": 0, 00:18:42.769 "nvme_error_stat": false, 00:18:42.769 "rdma_srq_size": 0, 00:18:42.769 "io_path_stat": false, 00:18:42.769 "allow_accel_sequence": false, 00:18:42.769 "rdma_max_cq_size": 0, 00:18:42.769 "rdma_cm_event_timeout_ms": 0, 00:18:42.769 "dhchap_digests": [ 00:18:42.769 "sha256", 00:18:42.769 "sha384", 00:18:42.769 "sha512" 00:18:42.769 ], 00:18:42.769 "dhchap_dhgroups": [ 00:18:42.769 "null", 00:18:42.769 "ffdhe2048", 00:18:42.769 "ffdhe3072", 00:18:42.769 "ffdhe4096", 00:18:42.769 "ffdhe6144", 00:18:42.769 "ffdhe8192" 00:18:42.769 ] 00:18:42.769 } 00:18:42.769 }, 00:18:42.769 { 00:18:42.769 "method": "bdev_nvme_attach_controller", 00:18:42.769 "params": { 00:18:42.769 "name": "TLSTEST", 00:18:42.769 "trtype": "TCP", 00:18:42.769 "adrfam": "IPv4", 00:18:42.769 "traddr": "10.0.0.2", 00:18:42.769 "trsvcid": "4420", 00:18:42.769 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.769 "prchk_reftag": false, 00:18:42.769 "prchk_guard": false, 00:18:42.769 "ctrlr_loss_timeout_sec": 0, 00:18:42.769 "reconnect_delay_sec": 0, 00:18:42.769 "fast_io_fail_timeout_sec": 0, 00:18:42.769 "psk": "/tmp/tmp.7kFpAoUVEj", 00:18:42.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:42.769 "hdgst": false, 00:18:42.769 "ddgst": false 00:18:42.769 } 00:18:42.769 }, 00:18:42.769 { 00:18:42.769 "method": "bdev_nvme_set_hotplug", 00:18:42.769 "params": { 00:18:42.769 "period_us": 100000, 00:18:42.769 "enable": false 00:18:42.769 } 00:18:42.769 }, 00:18:42.769 { 00:18:42.769 "method": "bdev_wait_for_examine" 00:18:42.769 } 00:18:42.769 ] 00:18:42.769 }, 00:18:42.769 { 00:18:42.769 "subsystem": "nbd", 00:18:42.769 "config": [] 00:18:42.769 } 00:18:42.769 ] 00:18:42.769 }' 00:18:42.769 00:20:01 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1540138 00:18:42.769 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1540138 ']' 00:18:42.769 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1540138 00:18:42.769 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:42.769 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:42.769 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1540138 00:18:42.769 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:18:42.769 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:18:42.769 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1540138' 00:18:42.769 killing process with pid 1540138 00:18:42.769 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1540138 00:18:42.769 Received shutdown signal, test time was about 10.000000 seconds 00:18:42.769 00:18:42.769 Latency(us) 00:18:42.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.769 
=================================================================================================================== 00:18:42.769 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:42.769 [2024-07-16 00:20:01.547715] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:42.769 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1540138 00:18:43.027 00:20:01 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1539703 00:18:43.027 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1539703 ']' 00:18:43.027 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1539703 00:18:43.027 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:43.027 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:43.027 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1539703 00:18:43.027 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:18:43.027 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:18:43.027 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1539703' 00:18:43.027 killing process with pid 1539703 00:18:43.027 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1539703 00:18:43.027 [2024-07-16 00:20:01.771437] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:43.027 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1539703 00:18:43.286 00:20:01 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:43.286 00:20:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:43.286 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:43.286 00:20:01 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:43.286 "subsystems": [ 00:18:43.286 { 00:18:43.286 "subsystem": "keyring", 00:18:43.286 "config": [] 00:18:43.286 }, 00:18:43.286 { 00:18:43.286 "subsystem": "iobuf", 00:18:43.286 "config": [ 00:18:43.286 { 00:18:43.286 "method": "iobuf_set_options", 00:18:43.286 "params": { 00:18:43.286 "small_pool_count": 8192, 00:18:43.286 "large_pool_count": 1024, 00:18:43.286 "small_bufsize": 8192, 00:18:43.286 "large_bufsize": 135168 00:18:43.286 } 00:18:43.286 } 00:18:43.286 ] 00:18:43.286 }, 00:18:43.286 { 00:18:43.286 "subsystem": "sock", 00:18:43.286 "config": [ 00:18:43.286 { 00:18:43.286 "method": "sock_set_default_impl", 00:18:43.286 "params": { 00:18:43.286 "impl_name": "posix" 00:18:43.286 } 00:18:43.286 }, 00:18:43.286 { 00:18:43.286 "method": "sock_impl_set_options", 00:18:43.286 "params": { 00:18:43.286 "impl_name": "ssl", 00:18:43.286 "recv_buf_size": 4096, 00:18:43.286 "send_buf_size": 4096, 00:18:43.286 "enable_recv_pipe": true, 00:18:43.286 "enable_quickack": false, 00:18:43.286 "enable_placement_id": 0, 00:18:43.286 "enable_zerocopy_send_server": true, 00:18:43.286 "enable_zerocopy_send_client": false, 00:18:43.286 "zerocopy_threshold": 0, 00:18:43.286 "tls_version": 0, 00:18:43.286 "enable_ktls": false 00:18:43.286 } 00:18:43.286 }, 00:18:43.286 { 00:18:43.286 "method": "sock_impl_set_options", 00:18:43.286 "params": { 00:18:43.286 "impl_name": "posix", 00:18:43.286 "recv_buf_size": 2097152, 
00:18:43.286 "send_buf_size": 2097152, 00:18:43.286 "enable_recv_pipe": true, 00:18:43.286 "enable_quickack": false, 00:18:43.286 "enable_placement_id": 0, 00:18:43.286 "enable_zerocopy_send_server": true, 00:18:43.286 "enable_zerocopy_send_client": false, 00:18:43.286 "zerocopy_threshold": 0, 00:18:43.286 "tls_version": 0, 00:18:43.286 "enable_ktls": false 00:18:43.286 } 00:18:43.286 } 00:18:43.286 ] 00:18:43.286 }, 00:18:43.286 { 00:18:43.286 "subsystem": "vmd", 00:18:43.286 "config": [] 00:18:43.286 }, 00:18:43.286 { 00:18:43.286 "subsystem": "accel", 00:18:43.286 "config": [ 00:18:43.286 { 00:18:43.286 "method": "accel_set_options", 00:18:43.286 "params": { 00:18:43.286 "small_cache_size": 128, 00:18:43.286 "large_cache_size": 16, 00:18:43.286 "task_count": 2048, 00:18:43.286 "sequence_count": 2048, 00:18:43.286 "buf_count": 2048 00:18:43.286 } 00:18:43.286 } 00:18:43.286 ] 00:18:43.286 }, 00:18:43.286 { 00:18:43.286 "subsystem": "bdev", 00:18:43.286 "config": [ 00:18:43.286 { 00:18:43.286 "method": "bdev_set_options", 00:18:43.286 "params": { 00:18:43.286 "bdev_io_pool_size": 65535, 00:18:43.286 "bdev_io_cache_size": 256, 00:18:43.286 "bdev_auto_examine": true, 00:18:43.286 "iobuf_small_cache_size": 128, 00:18:43.286 "iobuf_large_cache_size": 16 00:18:43.286 } 00:18:43.286 }, 00:18:43.286 { 00:18:43.286 "method": "bdev_raid_set_options", 00:18:43.286 "params": { 00:18:43.286 "process_window_size_kb": 1024 00:18:43.286 } 00:18:43.286 }, 00:18:43.286 { 00:18:43.286 "method": "bdev_iscsi_set_options", 00:18:43.286 "params": { 00:18:43.286 "timeout_sec": 30 00:18:43.286 } 00:18:43.286 }, 00:18:43.286 { 00:18:43.286 "method": "bdev_nvme_set_options", 00:18:43.286 "params": { 00:18:43.286 "action_on_timeout": "none", 00:18:43.286 "timeout_us": 0, 00:18:43.286 "timeout_admin_us": 0, 00:18:43.286 "keep_alive_timeout_ms": 10000, 00:18:43.286 "arbitration_burst": 0, 00:18:43.286 "low_priority_weight": 0, 00:18:43.286 "medium_priority_weight": 0, 00:18:43.286 "high_priority_weight": 0, 00:18:43.286 "nvme_adminq_poll_period_us": 10000, 00:18:43.286 "nvme_ioq_poll_period_us": 0, 00:18:43.286 "io_queue_requests": 0, 00:18:43.286 "delay_cmd_submit": true, 00:18:43.286 "transport_retry_count": 4, 00:18:43.287 "bdev_retry_count": 3, 00:18:43.287 "transport_ack_timeout": 0, 00:18:43.287 "ctrlr_loss_timeout_sec": 0, 00:18:43.287 "reconnect_delay_sec": 0, 00:18:43.287 "fast_io_fail_timeout_sec": 0, 00:18:43.287 "disable_auto_failback": false, 00:18:43.287 "generate_uuids": false, 00:18:43.287 "transport_tos": 0, 00:18:43.287 "nvme_error_stat": false, 00:18:43.287 "rdma_srq_size": 0, 00:18:43.287 "io_path_stat": false, 00:18:43.287 "allow_accel_sequence": false, 00:18:43.287 "rdma_max_cq_size": 0, 00:18:43.287 "rdma_cm_event_timeout_ms": 0, 00:18:43.287 "dhchap_digests": [ 00:18:43.287 "sha256", 00:18:43.287 "sha384", 00:18:43.287 "sha512" 00:18:43.287 ], 00:18:43.287 "dhchap_dhgroups": [ 00:18:43.287 "null", 00:18:43.287 "ffdhe2048", 00:18:43.287 "ffdhe3072", 00:18:43.287 "ffdhe4096", 00:18:43.287 "ffdhe6144", 00:18:43.287 "ffdhe8192" 00:18:43.287 ] 00:18:43.287 } 00:18:43.287 }, 00:18:43.287 { 00:18:43.287 "method": "bdev_nvme_set_hotplug", 00:18:43.287 "params": { 00:18:43.287 "period_us": 100000, 00:18:43.287 "enable": false 00:18:43.287 } 00:18:43.287 }, 00:18:43.287 { 00:18:43.287 "method": "bdev_malloc_create", 00:18:43.287 "params": { 00:18:43.287 "name": "malloc0", 00:18:43.287 "num_blocks": 8192, 00:18:43.287 "block_size": 4096, 00:18:43.287 "physical_block_size": 4096, 00:18:43.287 "uuid": 
"22efd8d5-7322-4a9a-bcbb-3b3af07e7850", 00:18:43.287 "optimal_io_boundary": 0 00:18:43.287 } 00:18:43.287 }, 00:18:43.287 { 00:18:43.287 "method": "bdev_wait_for_examine" 00:18:43.287 } 00:18:43.287 ] 00:18:43.287 }, 00:18:43.287 { 00:18:43.287 "subsystem": "nbd", 00:18:43.287 "config": [] 00:18:43.287 }, 00:18:43.287 { 00:18:43.287 "subsystem": "scheduler", 00:18:43.287 "config": [ 00:18:43.287 { 00:18:43.287 "method": "framework_set_scheduler", 00:18:43.287 "params": { 00:18:43.287 "name": "static" 00:18:43.287 } 00:18:43.287 } 00:18:43.287 ] 00:18:43.287 }, 00:18:43.287 { 00:18:43.287 "subsystem": "nvmf", 00:18:43.287 "config": [ 00:18:43.287 { 00:18:43.287 "method": "nvmf_set_config", 00:18:43.287 "params": { 00:18:43.287 "discovery_filter": "match_any", 00:18:43.287 "admin_cmd_passthru": { 00:18:43.287 "identify_ctrlr": false 00:18:43.287 } 00:18:43.287 } 00:18:43.287 }, 00:18:43.287 { 00:18:43.287 "method": "nvmf_set_max_subsystems", 00:18:43.287 "params": { 00:18:43.287 "max_subsystems": 1024 00:18:43.287 } 00:18:43.287 }, 00:18:43.287 { 00:18:43.287 "method": "nvmf_set_crdt", 00:18:43.287 "params": { 00:18:43.287 "crdt1": 0, 00:18:43.287 "crdt2": 0, 00:18:43.287 "crdt3": 0 00:18:43.287 } 00:18:43.287 }, 00:18:43.287 { 00:18:43.287 "method": "nvmf_create_transport", 00:18:43.287 "params": { 00:18:43.287 "trtype": "TCP", 00:18:43.287 "max_queue_depth": 128, 00:18:43.287 "max_io_qpairs_per_ctrlr": 127, 00:18:43.287 "in_capsule_data_size": 4096, 00:18:43.287 "max_io_size": 131072, 00:18:43.287 "io_unit_size": 131072, 00:18:43.287 "max_aq_depth": 128, 00:18:43.287 "num_shared_buffers": 511, 00:18:43.287 "buf_cache_size": 4294967295, 00:18:43.287 "dif_insert_or_strip": false, 00:18:43.287 "zcopy": false, 00:18:43.287 "c2h_success": false, 00:18:43.287 "sock_priority": 0, 00:18:43.287 "abort_timeout_sec": 1, 00:18:43.287 "ack_timeout": 0, 00:18:43.287 "data_wr_pool_size": 0 00:18:43.287 } 00:18:43.287 }, 00:18:43.287 { 00:18:43.287 "method": "nvmf_create_subsystem", 00:18:43.287 "params": { 00:18:43.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.287 "allow_any_host": false, 00:18:43.287 "serial_number": "SPDK00000000000001", 00:18:43.287 "model_number": "SPDK bdev Controller", 00:18:43.287 "max_namespaces": 10, 00:18:43.287 "min_cntlid": 1, 00:18:43.287 "max_cntlid": 65519, 00:18:43.287 "ana_reporting": false 00:18:43.287 } 00:18:43.287 }, 00:18:43.287 { 00:18:43.287 "method": "nvmf_subsystem_add_host", 00:18:43.287 "params": { 00:18:43.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.287 "host": "nqn.2016-06.io.spdk:host1", 00:18:43.287 "psk": "/tmp/tmp.7kFpAoUVEj" 00:18:43.287 } 00:18:43.287 }, 00:18:43.287 { 00:18:43.287 "method": "nvmf_subsystem_add_ns", 00:18:43.287 "params": { 00:18:43.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.287 "namespace": { 00:18:43.287 "nsid": 1, 00:18:43.287 "bdev_name": "malloc0", 00:18:43.287 "nguid": "22EFD8D573224A9ABCBB3B3AF07E7850", 00:18:43.287 "uuid": "22efd8d5-7322-4a9a-bcbb-3b3af07e7850", 00:18:43.287 "no_auto_visible": false 00:18:43.287 } 00:18:43.287 } 00:18:43.287 }, 00:18:43.287 { 00:18:43.287 "method": "nvmf_subsystem_add_listener", 00:18:43.287 "params": { 00:18:43.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.287 "listen_address": { 00:18:43.287 "trtype": "TCP", 00:18:43.287 "adrfam": "IPv4", 00:18:43.287 "traddr": "10.0.0.2", 00:18:43.287 "trsvcid": "4420" 00:18:43.287 }, 00:18:43.287 "secure_channel": true 00:18:43.287 } 00:18:43.287 } 00:18:43.287 ] 00:18:43.287 } 00:18:43.287 ] 00:18:43.287 }' 00:18:43.287 00:20:01 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.287 00:20:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:43.287 00:20:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1540429 00:18:43.287 00:20:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1540429 00:18:43.287 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1540429 ']' 00:18:43.287 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.287 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:43.287 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.287 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:43.287 00:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.287 [2024-07-16 00:20:02.007870] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:18:43.287 [2024-07-16 00:20:02.007915] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.287 [2024-07-16 00:20:02.063537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.546 [2024-07-16 00:20:02.141947] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.546 [2024-07-16 00:20:02.141978] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.546 [2024-07-16 00:20:02.141985] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.546 [2024-07-16 00:20:02.141991] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.546 [2024-07-16 00:20:02.141996] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
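This restart shows the save_config/replay pattern: rather than re-issuing every RPC, the harness serializes the live target state and feeds the JSON back on file descriptor 62. A sketch of the same cycle, assuming an SPDK checkout at ./spdk and omitting the netns and trace flags used in this job; /tmp/psk.txt stands in for the mktemp path:

  # Build up a TLS-enabled target over RPC, as target/tls.sh did earlier...
  ./spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
  ./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  ./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  ./spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  ./spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/psk.txt
  # ...then capture and replay it; <(...) is the same /dev/fd trick seen above.
  ./spdk/scripts/rpc.py save_config > /tmp/tgt.json
  ./spdk/build/bin/nvmf_tgt -m 0x2 -c <(cat /tmp/tgt.json)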
00:18:43.546 [2024-07-16 00:20:02.142044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.546 [2024-07-16 00:20:02.345327] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.546 [2024-07-16 00:20:02.361310] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:43.546 [2024-07-16 00:20:02.377367] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:43.546 [2024-07-16 00:20:02.388548] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.116 00:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:44.116 00:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:44.116 00:20:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:44.116 00:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:44.116 00:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.116 00:20:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.116 00:20:02 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1540667 00:18:44.116 00:20:02 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1540667 /var/tmp/bdevperf.sock 00:18:44.116 00:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1540667 ']' 00:18:44.116 00:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.116 00:20:02 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:44.116 00:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:44.116 00:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
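waitforlisten, used after every launch in this log, simply polls the RPC socket until the new process answers. A minimal sketch of the idiom; the real helper in autotest_common.sh is more elaborate (retry budget, error handling), so treat this as illustrative:

  # Poll until $1 serves RPCs on the given UNIX socket, or fail after ~10s.
  waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
      kill -0 "$pid" 2>/dev/null || return 1   # process died before listening
      ./spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
      sleep 0.1
    done
    return 1
  }
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock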
00:18:44.116 00:20:02 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:44.116 "subsystems": [ 00:18:44.116 { 00:18:44.116 "subsystem": "keyring", 00:18:44.116 "config": [] 00:18:44.116 }, 00:18:44.116 { 00:18:44.116 "subsystem": "iobuf", 00:18:44.116 "config": [ 00:18:44.116 { 00:18:44.116 "method": "iobuf_set_options", 00:18:44.116 "params": { 00:18:44.116 "small_pool_count": 8192, 00:18:44.116 "large_pool_count": 1024, 00:18:44.116 "small_bufsize": 8192, 00:18:44.116 "large_bufsize": 135168 00:18:44.116 } 00:18:44.116 } 00:18:44.116 ] 00:18:44.116 }, 00:18:44.116 { 00:18:44.116 "subsystem": "sock", 00:18:44.116 "config": [ 00:18:44.116 { 00:18:44.116 "method": "sock_set_default_impl", 00:18:44.116 "params": { 00:18:44.116 "impl_name": "posix" 00:18:44.116 } 00:18:44.116 }, 00:18:44.116 { 00:18:44.116 "method": "sock_impl_set_options", 00:18:44.116 "params": { 00:18:44.116 "impl_name": "ssl", 00:18:44.116 "recv_buf_size": 4096, 00:18:44.116 "send_buf_size": 4096, 00:18:44.116 "enable_recv_pipe": true, 00:18:44.116 "enable_quickack": false, 00:18:44.116 "enable_placement_id": 0, 00:18:44.116 "enable_zerocopy_send_server": true, 00:18:44.116 "enable_zerocopy_send_client": false, 00:18:44.116 "zerocopy_threshold": 0, 00:18:44.116 "tls_version": 0, 00:18:44.116 "enable_ktls": false 00:18:44.116 } 00:18:44.116 }, 00:18:44.116 { 00:18:44.116 "method": "sock_impl_set_options", 00:18:44.116 "params": { 00:18:44.116 "impl_name": "posix", 00:18:44.116 "recv_buf_size": 2097152, 00:18:44.116 "send_buf_size": 2097152, 00:18:44.116 "enable_recv_pipe": true, 00:18:44.116 "enable_quickack": false, 00:18:44.116 "enable_placement_id": 0, 00:18:44.116 "enable_zerocopy_send_server": true, 00:18:44.116 "enable_zerocopy_send_client": false, 00:18:44.116 "zerocopy_threshold": 0, 00:18:44.116 "tls_version": 0, 00:18:44.116 "enable_ktls": false 00:18:44.116 } 00:18:44.116 } 00:18:44.116 ] 00:18:44.116 }, 00:18:44.116 { 00:18:44.116 "subsystem": "vmd", 00:18:44.116 "config": [] 00:18:44.116 }, 00:18:44.116 { 00:18:44.116 "subsystem": "accel", 00:18:44.116 "config": [ 00:18:44.116 { 00:18:44.116 "method": "accel_set_options", 00:18:44.116 "params": { 00:18:44.116 "small_cache_size": 128, 00:18:44.116 "large_cache_size": 16, 00:18:44.116 "task_count": 2048, 00:18:44.116 "sequence_count": 2048, 00:18:44.116 "buf_count": 2048 00:18:44.116 } 00:18:44.116 } 00:18:44.116 ] 00:18:44.116 }, 00:18:44.116 { 00:18:44.116 "subsystem": "bdev", 00:18:44.116 "config": [ 00:18:44.116 { 00:18:44.116 "method": "bdev_set_options", 00:18:44.116 "params": { 00:18:44.116 "bdev_io_pool_size": 65535, 00:18:44.116 "bdev_io_cache_size": 256, 00:18:44.116 "bdev_auto_examine": true, 00:18:44.116 "iobuf_small_cache_size": 128, 00:18:44.116 "iobuf_large_cache_size": 16 00:18:44.116 } 00:18:44.116 }, 00:18:44.116 { 00:18:44.116 "method": "bdev_raid_set_options", 00:18:44.116 "params": { 00:18:44.116 "process_window_size_kb": 1024 00:18:44.116 } 00:18:44.116 }, 00:18:44.116 { 00:18:44.116 "method": "bdev_iscsi_set_options", 00:18:44.116 "params": { 00:18:44.116 "timeout_sec": 30 00:18:44.116 } 00:18:44.116 }, 00:18:44.116 { 00:18:44.116 "method": "bdev_nvme_set_options", 00:18:44.116 "params": { 00:18:44.116 "action_on_timeout": "none", 00:18:44.116 "timeout_us": 0, 00:18:44.116 "timeout_admin_us": 0, 00:18:44.116 "keep_alive_timeout_ms": 10000, 00:18:44.116 "arbitration_burst": 0, 00:18:44.116 "low_priority_weight": 0, 00:18:44.116 "medium_priority_weight": 0, 00:18:44.116 "high_priority_weight": 0, 00:18:44.116 
"nvme_adminq_poll_period_us": 10000, 00:18:44.116 "nvme_ioq_poll_period_us": 0, 00:18:44.116 "io_queue_requests": 512, 00:18:44.116 "delay_cmd_submit": true, 00:18:44.116 "transport_retry_count": 4, 00:18:44.116 "bdev_retry_count": 3, 00:18:44.116 "transport_ack_timeout": 0, 00:18:44.116 "ctrlr_loss_timeout_sec": 0, 00:18:44.116 "reconnect_delay_sec": 0, 00:18:44.116 "fast_io_fail_timeout_sec": 0, 00:18:44.116 "disable_auto_failback": false, 00:18:44.116 "generate_uuids": false, 00:18:44.116 "transport_tos": 0, 00:18:44.116 "nvme_error_stat": false, 00:18:44.116 "rdma_srq_size": 0, 00:18:44.116 "io_path_stat": false, 00:18:44.116 "allow_accel_sequence": false, 00:18:44.116 "rdma_max_cq_size": 0, 00:18:44.116 "rdma_cm_event_timeout_ms": 0, 00:18:44.116 "dhchap_digests": [ 00:18:44.116 "sha256", 00:18:44.116 "sha384", 00:18:44.116 "sha512" 00:18:44.116 ], 00:18:44.116 "dhchap_dhgroups": [ 00:18:44.116 "null", 00:18:44.116 "ffdhe2048", 00:18:44.116 "ffdhe3072", 00:18:44.116 "ffdhe4096", 00:18:44.116 "ffdhe6144", 00:18:44.116 "ffdhe8192" 00:18:44.116 ] 00:18:44.116 } 00:18:44.116 }, 00:18:44.116 { 00:18:44.116 "method": "bdev_nvme_attach_controller", 00:18:44.116 "params": { 00:18:44.116 "name": "TLSTEST", 00:18:44.116 "trtype": "TCP", 00:18:44.116 "adrfam": "IPv4", 00:18:44.116 "traddr": "10.0.0.2", 00:18:44.116 "trsvcid": "4420", 00:18:44.116 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.116 "prchk_reftag": false, 00:18:44.116 "prchk_guard": false, 00:18:44.116 "ctrlr_loss_timeout_sec": 0, 00:18:44.116 "reconnect_delay_sec": 0, 00:18:44.116 "fast_io_fail_timeout_sec": 0, 00:18:44.116 "psk": "/tmp/tmp.7kFpAoUVEj", 00:18:44.116 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.116 "hdgst": false, 00:18:44.116 "ddgst": false 00:18:44.116 } 00:18:44.116 }, 00:18:44.116 { 00:18:44.116 "method": "bdev_nvme_set_hotplug", 00:18:44.116 "params": { 00:18:44.116 "period_us": 100000, 00:18:44.116 "enable": false 00:18:44.116 } 00:18:44.116 }, 00:18:44.116 { 00:18:44.116 "method": "bdev_wait_for_examine" 00:18:44.117 } 00:18:44.117 ] 00:18:44.117 }, 00:18:44.117 { 00:18:44.117 "subsystem": "nbd", 00:18:44.117 "config": [] 00:18:44.117 } 00:18:44.117 ] 00:18:44.117 }' 00:18:44.117 00:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:44.117 00:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.117 [2024-07-16 00:20:02.901171] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:18:44.117 [2024-07-16 00:20:02.901219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540667 ] 00:18:44.117 [2024-07-16 00:20:02.951614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.376 [2024-07-16 00:20:03.025076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.376 [2024-07-16 00:20:03.167483] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:44.376 [2024-07-16 00:20:03.167573] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:44.944 00:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:44.944 00:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:44.944 00:20:03 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:44.944 Running I/O for 10 seconds... 00:18:57.148 00:18:57.148 Latency(us) 00:18:57.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.148 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:57.148 Verification LBA range: start 0x0 length 0x2000 00:18:57.148 TLSTESTn1 : 10.04 3303.80 12.91 0.00 0.00 38669.38 4673.00 70664.90 00:18:57.148 =================================================================================================================== 00:18:57.148 Total : 3303.80 12.91 0.00 0.00 38669.38 4673.00 70664.90 00:18:57.148 0 00:18:57.148 00:20:13 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:57.148 00:20:13 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1540667 00:18:57.148 00:20:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1540667 ']' 00:18:57.148 00:20:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1540667 00:18:57.148 00:20:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:57.148 00:20:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:57.148 00:20:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1540667 00:18:57.148 00:20:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:18:57.148 00:20:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:18:57.148 00:20:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1540667' 00:18:57.148 killing process with pid 1540667 00:18:57.148 00:20:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1540667 00:18:57.148 Received shutdown signal, test time was about 10.000000 seconds 00:18:57.148 00:18:57.148 Latency(us) 00:18:57.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.148 =================================================================================================================== 00:18:57.148 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:57.148 [2024-07-16 00:20:13.902740] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:57.148 
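The MiB/s column in the verification table above is just IOPS times the 4096-byte IO size set with -o 4096; a quick sanity check of the 10-second TLS run:

  # 3303.80 IOPS * 4096 B / 2^20 B per MiB ~= 12.91 MiB/s, matching TLSTESTn1's row.
  echo '3303.80 * 4096 / 1048576' | bc -l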
00:20:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1540667 00:18:57.148 00:20:14 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1540429 00:18:57.148 00:20:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1540429 ']' 00:18:57.148 00:20:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1540429 00:18:57.148 00:20:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:57.148 00:20:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:57.149 00:20:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1540429 00:18:57.149 00:20:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:18:57.149 00:20:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:18:57.149 00:20:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1540429' 00:18:57.149 killing process with pid 1540429 00:18:57.149 00:20:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1540429 00:18:57.149 [2024-07-16 00:20:14.128896] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:57.149 00:20:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1540429 00:18:57.149 00:20:14 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:57.149 00:20:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:57.149 00:20:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:57.149 00:20:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.149 00:20:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1542514 00:18:57.149 00:20:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1542514 00:18:57.149 00:20:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:57.149 00:20:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1542514 ']' 00:18:57.149 00:20:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.149 00:20:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:57.149 00:20:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.149 00:20:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:57.149 00:20:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.149 [2024-07-16 00:20:14.372003] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:18:57.149 [2024-07-16 00:20:14.372046] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.149 [2024-07-16 00:20:14.429156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.149 [2024-07-16 00:20:14.497228] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:57.149 [2024-07-16 00:20:14.497281] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.149 [2024-07-16 00:20:14.497288] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.149 [2024-07-16 00:20:14.497294] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.149 [2024-07-16 00:20:14.497299] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:57.149 [2024-07-16 00:20:14.497316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.149 00:20:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:57.149 00:20:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:57.149 00:20:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:57.149 00:20:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:57.149 00:20:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.149 00:20:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.149 00:20:15 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.7kFpAoUVEj 00:18:57.149 00:20:15 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.7kFpAoUVEj 00:18:57.149 00:20:15 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:57.149 [2024-07-16 00:20:15.372581] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.149 00:20:15 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:57.149 00:20:15 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:57.149 [2024-07-16 00:20:15.717471] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:57.149 [2024-07-16 00:20:15.717666] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.149 00:20:15 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:57.149 malloc0 00:18:57.149 00:20:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:57.408 00:20:16 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7kFpAoUVEj 00:18:57.408 [2024-07-16 00:20:16.243179] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:57.408 00:20:16 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:57.408 00:20:16 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1542776 00:18:57.408 00:20:16 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:18:57.408 00:20:16 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1542776 /var/tmp/bdevperf.sock 00:18:57.408 00:20:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1542776 ']' 00:18:57.667 00:20:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:57.667 00:20:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:57.667 00:20:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:57.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:57.667 00:20:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:57.667 00:20:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.667 [2024-07-16 00:20:16.285931] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:18:57.667 [2024-07-16 00:20:16.285982] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1542776 ] 00:18:57.667 [2024-07-16 00:20:16.338858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.667 [2024-07-16 00:20:16.418318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.667 00:20:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:57.667 00:20:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:57.667 00:20:16 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7kFpAoUVEj 00:18:57.927 00:20:16 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:58.186 [2024-07-16 00:20:16.845260] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:58.186 nvme0n1 00:18:58.186 00:20:16 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:58.186 Running I/O for 1 seconds... 
00:18:59.566 00:18:59.566 Latency(us) 00:18:59.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.566 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:59.566 Verification LBA range: start 0x0 length 0x2000 00:18:59.566 nvme0n1 : 1.02 5148.25 20.11 0.00 0.00 24632.87 6268.66 31229.33 00:18:59.566 =================================================================================================================== 00:18:59.566 Total : 5148.25 20.11 0.00 0.00 24632.87 6268.66 31229.33 00:18:59.566 0 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1542776 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1542776 ']' 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1542776 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1542776 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1542776' 00:18:59.566 killing process with pid 1542776 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1542776 00:18:59.566 Received shutdown signal, test time was about 1.000000 seconds 00:18:59.566 00:18:59.566 Latency(us) 00:18:59.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.566 =================================================================================================================== 00:18:59.566 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1542776 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1542514 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1542514 ']' 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1542514 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1542514 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1542514' 00:18:59.566 killing process with pid 1542514 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1542514 00:18:59.566 [2024-07-16 00:20:18.334933] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:59.566 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1542514 00:18:59.826 00:20:18 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:18:59.826 00:20:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:59.826 
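The one-second verify pass above used the newer keyring flow on the initiator: the PSK file is registered as a named key and bdev_nvme_attach_controller references --psk key0, rather than the deprecated spdk_nvme_ctrlr_opts.psk path warned about earlier. In sketch form; the key file content shown is a placeholder in the NVMe TLS PSK interchange format, not the key used in this run:

  # Register the PSK file as key0 on bdevperf's RPC socket, then attach:
  echo 'NVMeTLSkey-1:01:<base64 PSK material>:' > /tmp/psk.txt   # placeholder key
  ./spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/psk.txt
  ./spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1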
00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:59.826 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.826 00:20:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1543245 00:18:59.826 00:20:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1543245 00:18:59.826 00:20:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:59.826 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1543245 ']' 00:18:59.826 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.826 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:59.826 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.826 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:59.826 00:20:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.826 [2024-07-16 00:20:18.582409] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:18:59.826 [2024-07-16 00:20:18.582457] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.826 [2024-07-16 00:20:18.637874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.085 [2024-07-16 00:20:18.716081] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.085 [2024-07-16 00:20:18.716118] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.085 [2024-07-16 00:20:18.716129] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.085 [2024-07-16 00:20:18.716134] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.085 [2024-07-16 00:20:18.716139] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:00.085 [2024-07-16 00:20:18.716157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.654 00:20:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:00.654 00:20:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:00.654 00:20:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:00.654 00:20:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:00.654 00:20:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.654 00:20:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.654 00:20:19 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:19:00.654 00:20:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:00.654 00:20:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.654 [2024-07-16 00:20:19.429983] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.654 malloc0 00:19:00.654 [2024-07-16 00:20:19.458288] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:00.654 [2024-07-16 00:20:19.458480] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.654 00:20:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:00.654 00:20:19 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1543410 00:19:00.654 00:20:19 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1543410 /var/tmp/bdevperf.sock 00:19:00.654 00:20:19 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:00.654 00:20:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1543410 ']' 00:19:00.654 00:20:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:00.654 00:20:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:00.654 00:20:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:00.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:00.654 00:20:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:00.654 00:20:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.914 [2024-07-16 00:20:19.531357] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
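On the initiator side, bdevperf is started idle with -z and then provisioned over its own RPC socket. The key registration and TLS attach that follow in the trace reduce to two rpc.py calls plus the perform_tests trigger; a condensed sketch, with every path, address and NQN taken from this run:

    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # Register the pre-shared key file under the name the attach call references.
    $RPC keyring_file_add_key key0 /tmp/tmp.7kFpAoUVEj
    # Attach an NVMe/TCP controller to the TLS listener using that key.
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # Run the workload bdevperf was started with (-q 128 -o 4k -w verify -t 1).
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests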
00:19:00.914 [2024-07-16 00:20:19.531401] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1543410 ] 00:19:00.914 [2024-07-16 00:20:19.584172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.914 [2024-07-16 00:20:19.664158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.482 00:20:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:01.741 00:20:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:01.741 00:20:20 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7kFpAoUVEj 00:19:01.741 00:20:20 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:02.000 [2024-07-16 00:20:20.672080] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.000 nvme0n1 00:19:02.000 00:20:20 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:02.292 Running I/O for 1 seconds... 00:19:03.229 00:19:03.229 Latency(us) 00:19:03.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.229 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:03.229 Verification LBA range: start 0x0 length 0x2000 00:19:03.229 nvme0n1 : 1.02 5360.77 20.94 0.00 0.00 23664.95 6126.19 41031.23 00:19:03.229 =================================================================================================================== 00:19:03.229 Total : 5360.77 20.94 0.00 0.00 23664.95 6126.19 41031.23 00:19:03.229 0 00:19:03.229 00:20:21 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:19:03.229 00:20:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:03.229 00:20:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.229 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:03.229 00:20:22 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:19:03.229 "subsystems": [ 00:19:03.229 { 00:19:03.229 "subsystem": "keyring", 00:19:03.229 "config": [ 00:19:03.229 { 00:19:03.229 "method": "keyring_file_add_key", 00:19:03.229 "params": { 00:19:03.229 "name": "key0", 00:19:03.229 "path": "/tmp/tmp.7kFpAoUVEj" 00:19:03.229 } 00:19:03.229 } 00:19:03.229 ] 00:19:03.229 }, 00:19:03.229 { 00:19:03.229 "subsystem": "iobuf", 00:19:03.229 "config": [ 00:19:03.229 { 00:19:03.229 "method": "iobuf_set_options", 00:19:03.229 "params": { 00:19:03.229 "small_pool_count": 8192, 00:19:03.229 "large_pool_count": 1024, 00:19:03.229 "small_bufsize": 8192, 00:19:03.229 "large_bufsize": 135168 00:19:03.229 } 00:19:03.229 } 00:19:03.229 ] 00:19:03.229 }, 00:19:03.229 { 00:19:03.229 "subsystem": "sock", 00:19:03.229 "config": [ 00:19:03.229 { 00:19:03.229 "method": "sock_set_default_impl", 00:19:03.229 "params": { 00:19:03.229 "impl_name": "posix" 00:19:03.229 } 00:19:03.229 }, 00:19:03.229 { 00:19:03.229 "method": 
"sock_impl_set_options", 00:19:03.229 "params": { 00:19:03.229 "impl_name": "ssl", 00:19:03.229 "recv_buf_size": 4096, 00:19:03.229 "send_buf_size": 4096, 00:19:03.229 "enable_recv_pipe": true, 00:19:03.229 "enable_quickack": false, 00:19:03.229 "enable_placement_id": 0, 00:19:03.229 "enable_zerocopy_send_server": true, 00:19:03.229 "enable_zerocopy_send_client": false, 00:19:03.229 "zerocopy_threshold": 0, 00:19:03.229 "tls_version": 0, 00:19:03.229 "enable_ktls": false 00:19:03.229 } 00:19:03.229 }, 00:19:03.229 { 00:19:03.229 "method": "sock_impl_set_options", 00:19:03.229 "params": { 00:19:03.229 "impl_name": "posix", 00:19:03.229 "recv_buf_size": 2097152, 00:19:03.229 "send_buf_size": 2097152, 00:19:03.229 "enable_recv_pipe": true, 00:19:03.229 "enable_quickack": false, 00:19:03.229 "enable_placement_id": 0, 00:19:03.229 "enable_zerocopy_send_server": true, 00:19:03.229 "enable_zerocopy_send_client": false, 00:19:03.229 "zerocopy_threshold": 0, 00:19:03.229 "tls_version": 0, 00:19:03.229 "enable_ktls": false 00:19:03.229 } 00:19:03.229 } 00:19:03.229 ] 00:19:03.229 }, 00:19:03.229 { 00:19:03.229 "subsystem": "vmd", 00:19:03.229 "config": [] 00:19:03.229 }, 00:19:03.229 { 00:19:03.229 "subsystem": "accel", 00:19:03.229 "config": [ 00:19:03.229 { 00:19:03.229 "method": "accel_set_options", 00:19:03.229 "params": { 00:19:03.229 "small_cache_size": 128, 00:19:03.229 "large_cache_size": 16, 00:19:03.229 "task_count": 2048, 00:19:03.229 "sequence_count": 2048, 00:19:03.229 "buf_count": 2048 00:19:03.229 } 00:19:03.229 } 00:19:03.229 ] 00:19:03.229 }, 00:19:03.229 { 00:19:03.230 "subsystem": "bdev", 00:19:03.230 "config": [ 00:19:03.230 { 00:19:03.230 "method": "bdev_set_options", 00:19:03.230 "params": { 00:19:03.230 "bdev_io_pool_size": 65535, 00:19:03.230 "bdev_io_cache_size": 256, 00:19:03.230 "bdev_auto_examine": true, 00:19:03.230 "iobuf_small_cache_size": 128, 00:19:03.230 "iobuf_large_cache_size": 16 00:19:03.230 } 00:19:03.230 }, 00:19:03.230 { 00:19:03.230 "method": "bdev_raid_set_options", 00:19:03.230 "params": { 00:19:03.230 "process_window_size_kb": 1024 00:19:03.230 } 00:19:03.230 }, 00:19:03.230 { 00:19:03.230 "method": "bdev_iscsi_set_options", 00:19:03.230 "params": { 00:19:03.230 "timeout_sec": 30 00:19:03.230 } 00:19:03.230 }, 00:19:03.230 { 00:19:03.230 "method": "bdev_nvme_set_options", 00:19:03.230 "params": { 00:19:03.230 "action_on_timeout": "none", 00:19:03.230 "timeout_us": 0, 00:19:03.230 "timeout_admin_us": 0, 00:19:03.230 "keep_alive_timeout_ms": 10000, 00:19:03.230 "arbitration_burst": 0, 00:19:03.230 "low_priority_weight": 0, 00:19:03.230 "medium_priority_weight": 0, 00:19:03.230 "high_priority_weight": 0, 00:19:03.230 "nvme_adminq_poll_period_us": 10000, 00:19:03.230 "nvme_ioq_poll_period_us": 0, 00:19:03.230 "io_queue_requests": 0, 00:19:03.230 "delay_cmd_submit": true, 00:19:03.230 "transport_retry_count": 4, 00:19:03.230 "bdev_retry_count": 3, 00:19:03.230 "transport_ack_timeout": 0, 00:19:03.230 "ctrlr_loss_timeout_sec": 0, 00:19:03.230 "reconnect_delay_sec": 0, 00:19:03.230 "fast_io_fail_timeout_sec": 0, 00:19:03.230 "disable_auto_failback": false, 00:19:03.230 "generate_uuids": false, 00:19:03.230 "transport_tos": 0, 00:19:03.230 "nvme_error_stat": false, 00:19:03.230 "rdma_srq_size": 0, 00:19:03.230 "io_path_stat": false, 00:19:03.230 "allow_accel_sequence": false, 00:19:03.230 "rdma_max_cq_size": 0, 00:19:03.230 "rdma_cm_event_timeout_ms": 0, 00:19:03.230 "dhchap_digests": [ 00:19:03.230 "sha256", 00:19:03.230 "sha384", 00:19:03.230 "sha512" 
00:19:03.230 ], 00:19:03.230 "dhchap_dhgroups": [ 00:19:03.230 "null", 00:19:03.230 "ffdhe2048", 00:19:03.230 "ffdhe3072", 00:19:03.230 "ffdhe4096", 00:19:03.230 "ffdhe6144", 00:19:03.230 "ffdhe8192" 00:19:03.230 ] 00:19:03.230 } 00:19:03.230 }, 00:19:03.230 { 00:19:03.230 "method": "bdev_nvme_set_hotplug", 00:19:03.230 "params": { 00:19:03.230 "period_us": 100000, 00:19:03.230 "enable": false 00:19:03.230 } 00:19:03.230 }, 00:19:03.230 { 00:19:03.230 "method": "bdev_malloc_create", 00:19:03.230 "params": { 00:19:03.230 "name": "malloc0", 00:19:03.230 "num_blocks": 8192, 00:19:03.230 "block_size": 4096, 00:19:03.230 "physical_block_size": 4096, 00:19:03.230 "uuid": "add9ec5f-9632-4bfc-96fe-1a164fa5da20", 00:19:03.230 "optimal_io_boundary": 0 00:19:03.230 } 00:19:03.230 }, 00:19:03.230 { 00:19:03.230 "method": "bdev_wait_for_examine" 00:19:03.230 } 00:19:03.230 ] 00:19:03.230 }, 00:19:03.230 { 00:19:03.230 "subsystem": "nbd", 00:19:03.230 "config": [] 00:19:03.230 }, 00:19:03.230 { 00:19:03.230 "subsystem": "scheduler", 00:19:03.230 "config": [ 00:19:03.230 { 00:19:03.230 "method": "framework_set_scheduler", 00:19:03.230 "params": { 00:19:03.230 "name": "static" 00:19:03.230 } 00:19:03.230 } 00:19:03.230 ] 00:19:03.230 }, 00:19:03.230 { 00:19:03.230 "subsystem": "nvmf", 00:19:03.230 "config": [ 00:19:03.230 { 00:19:03.230 "method": "nvmf_set_config", 00:19:03.230 "params": { 00:19:03.230 "discovery_filter": "match_any", 00:19:03.230 "admin_cmd_passthru": { 00:19:03.230 "identify_ctrlr": false 00:19:03.230 } 00:19:03.230 } 00:19:03.230 }, 00:19:03.230 { 00:19:03.230 "method": "nvmf_set_max_subsystems", 00:19:03.230 "params": { 00:19:03.230 "max_subsystems": 1024 00:19:03.230 } 00:19:03.230 }, 00:19:03.230 { 00:19:03.230 "method": "nvmf_set_crdt", 00:19:03.230 "params": { 00:19:03.230 "crdt1": 0, 00:19:03.230 "crdt2": 0, 00:19:03.230 "crdt3": 0 00:19:03.230 } 00:19:03.230 }, 00:19:03.230 { 00:19:03.230 "method": "nvmf_create_transport", 00:19:03.230 "params": { 00:19:03.230 "trtype": "TCP", 00:19:03.230 "max_queue_depth": 128, 00:19:03.230 "max_io_qpairs_per_ctrlr": 127, 00:19:03.230 "in_capsule_data_size": 4096, 00:19:03.230 "max_io_size": 131072, 00:19:03.230 "io_unit_size": 131072, 00:19:03.230 "max_aq_depth": 128, 00:19:03.230 "num_shared_buffers": 511, 00:19:03.230 "buf_cache_size": 4294967295, 00:19:03.230 "dif_insert_or_strip": false, 00:19:03.230 "zcopy": false, 00:19:03.230 "c2h_success": false, 00:19:03.230 "sock_priority": 0, 00:19:03.230 "abort_timeout_sec": 1, 00:19:03.230 "ack_timeout": 0, 00:19:03.230 "data_wr_pool_size": 0 00:19:03.230 } 00:19:03.230 }, 00:19:03.230 { 00:19:03.230 "method": "nvmf_create_subsystem", 00:19:03.230 "params": { 00:19:03.230 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.230 "allow_any_host": false, 00:19:03.230 "serial_number": "00000000000000000000", 00:19:03.230 "model_number": "SPDK bdev Controller", 00:19:03.230 "max_namespaces": 32, 00:19:03.230 "min_cntlid": 1, 00:19:03.230 "max_cntlid": 65519, 00:19:03.230 "ana_reporting": false 00:19:03.230 } 00:19:03.230 }, 00:19:03.230 { 00:19:03.230 "method": "nvmf_subsystem_add_host", 00:19:03.230 "params": { 00:19:03.230 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.230 "host": "nqn.2016-06.io.spdk:host1", 00:19:03.230 "psk": "key0" 00:19:03.230 } 00:19:03.230 }, 00:19:03.230 { 00:19:03.230 "method": "nvmf_subsystem_add_ns", 00:19:03.230 "params": { 00:19:03.230 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.230 "namespace": { 00:19:03.230 "nsid": 1, 00:19:03.230 "bdev_name": "malloc0", 00:19:03.230 
"nguid": "ADD9EC5F96324BFC96FE1A164FA5DA20", 00:19:03.230 "uuid": "add9ec5f-9632-4bfc-96fe-1a164fa5da20", 00:19:03.230 "no_auto_visible": false 00:19:03.230 } 00:19:03.230 } 00:19:03.230 }, 00:19:03.230 { 00:19:03.230 "method": "nvmf_subsystem_add_listener", 00:19:03.230 "params": { 00:19:03.230 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.230 "listen_address": { 00:19:03.230 "trtype": "TCP", 00:19:03.230 "adrfam": "IPv4", 00:19:03.230 "traddr": "10.0.0.2", 00:19:03.230 "trsvcid": "4420" 00:19:03.230 }, 00:19:03.230 "secure_channel": false, 00:19:03.230 "sock_impl": "ssl" 00:19:03.230 } 00:19:03.230 } 00:19:03.230 ] 00:19:03.230 } 00:19:03.230 ] 00:19:03.230 }' 00:19:03.230 00:20:22 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:03.490 00:20:22 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:19:03.490 "subsystems": [ 00:19:03.490 { 00:19:03.490 "subsystem": "keyring", 00:19:03.490 "config": [ 00:19:03.490 { 00:19:03.490 "method": "keyring_file_add_key", 00:19:03.490 "params": { 00:19:03.490 "name": "key0", 00:19:03.490 "path": "/tmp/tmp.7kFpAoUVEj" 00:19:03.490 } 00:19:03.490 } 00:19:03.490 ] 00:19:03.490 }, 00:19:03.490 { 00:19:03.490 "subsystem": "iobuf", 00:19:03.490 "config": [ 00:19:03.490 { 00:19:03.490 "method": "iobuf_set_options", 00:19:03.490 "params": { 00:19:03.490 "small_pool_count": 8192, 00:19:03.490 "large_pool_count": 1024, 00:19:03.490 "small_bufsize": 8192, 00:19:03.490 "large_bufsize": 135168 00:19:03.490 } 00:19:03.490 } 00:19:03.490 ] 00:19:03.491 }, 00:19:03.491 { 00:19:03.491 "subsystem": "sock", 00:19:03.491 "config": [ 00:19:03.491 { 00:19:03.491 "method": "sock_set_default_impl", 00:19:03.491 "params": { 00:19:03.491 "impl_name": "posix" 00:19:03.491 } 00:19:03.491 }, 00:19:03.491 { 00:19:03.491 "method": "sock_impl_set_options", 00:19:03.491 "params": { 00:19:03.491 "impl_name": "ssl", 00:19:03.491 "recv_buf_size": 4096, 00:19:03.491 "send_buf_size": 4096, 00:19:03.491 "enable_recv_pipe": true, 00:19:03.491 "enable_quickack": false, 00:19:03.491 "enable_placement_id": 0, 00:19:03.491 "enable_zerocopy_send_server": true, 00:19:03.491 "enable_zerocopy_send_client": false, 00:19:03.491 "zerocopy_threshold": 0, 00:19:03.491 "tls_version": 0, 00:19:03.491 "enable_ktls": false 00:19:03.491 } 00:19:03.491 }, 00:19:03.491 { 00:19:03.491 "method": "sock_impl_set_options", 00:19:03.491 "params": { 00:19:03.491 "impl_name": "posix", 00:19:03.491 "recv_buf_size": 2097152, 00:19:03.491 "send_buf_size": 2097152, 00:19:03.491 "enable_recv_pipe": true, 00:19:03.491 "enable_quickack": false, 00:19:03.491 "enable_placement_id": 0, 00:19:03.491 "enable_zerocopy_send_server": true, 00:19:03.491 "enable_zerocopy_send_client": false, 00:19:03.491 "zerocopy_threshold": 0, 00:19:03.491 "tls_version": 0, 00:19:03.491 "enable_ktls": false 00:19:03.491 } 00:19:03.491 } 00:19:03.491 ] 00:19:03.491 }, 00:19:03.491 { 00:19:03.491 "subsystem": "vmd", 00:19:03.491 "config": [] 00:19:03.491 }, 00:19:03.491 { 00:19:03.491 "subsystem": "accel", 00:19:03.491 "config": [ 00:19:03.491 { 00:19:03.491 "method": "accel_set_options", 00:19:03.491 "params": { 00:19:03.491 "small_cache_size": 128, 00:19:03.491 "large_cache_size": 16, 00:19:03.491 "task_count": 2048, 00:19:03.491 "sequence_count": 2048, 00:19:03.491 "buf_count": 2048 00:19:03.491 } 00:19:03.491 } 00:19:03.491 ] 00:19:03.491 }, 00:19:03.491 { 00:19:03.491 "subsystem": "bdev", 00:19:03.491 "config": [ 00:19:03.491 { 
00:19:03.491 "method": "bdev_set_options", 00:19:03.491 "params": { 00:19:03.491 "bdev_io_pool_size": 65535, 00:19:03.491 "bdev_io_cache_size": 256, 00:19:03.491 "bdev_auto_examine": true, 00:19:03.491 "iobuf_small_cache_size": 128, 00:19:03.491 "iobuf_large_cache_size": 16 00:19:03.491 } 00:19:03.491 }, 00:19:03.491 { 00:19:03.491 "method": "bdev_raid_set_options", 00:19:03.491 "params": { 00:19:03.491 "process_window_size_kb": 1024 00:19:03.491 } 00:19:03.491 }, 00:19:03.491 { 00:19:03.491 "method": "bdev_iscsi_set_options", 00:19:03.491 "params": { 00:19:03.491 "timeout_sec": 30 00:19:03.491 } 00:19:03.491 }, 00:19:03.491 { 00:19:03.491 "method": "bdev_nvme_set_options", 00:19:03.491 "params": { 00:19:03.491 "action_on_timeout": "none", 00:19:03.491 "timeout_us": 0, 00:19:03.491 "timeout_admin_us": 0, 00:19:03.491 "keep_alive_timeout_ms": 10000, 00:19:03.491 "arbitration_burst": 0, 00:19:03.491 "low_priority_weight": 0, 00:19:03.491 "medium_priority_weight": 0, 00:19:03.491 "high_priority_weight": 0, 00:19:03.491 "nvme_adminq_poll_period_us": 10000, 00:19:03.491 "nvme_ioq_poll_period_us": 0, 00:19:03.491 "io_queue_requests": 512, 00:19:03.491 "delay_cmd_submit": true, 00:19:03.491 "transport_retry_count": 4, 00:19:03.491 "bdev_retry_count": 3, 00:19:03.491 "transport_ack_timeout": 0, 00:19:03.491 "ctrlr_loss_timeout_sec": 0, 00:19:03.491 "reconnect_delay_sec": 0, 00:19:03.491 "fast_io_fail_timeout_sec": 0, 00:19:03.491 "disable_auto_failback": false, 00:19:03.491 "generate_uuids": false, 00:19:03.491 "transport_tos": 0, 00:19:03.491 "nvme_error_stat": false, 00:19:03.491 "rdma_srq_size": 0, 00:19:03.491 "io_path_stat": false, 00:19:03.491 "allow_accel_sequence": false, 00:19:03.491 "rdma_max_cq_size": 0, 00:19:03.491 "rdma_cm_event_timeout_ms": 0, 00:19:03.491 "dhchap_digests": [ 00:19:03.491 "sha256", 00:19:03.491 "sha384", 00:19:03.491 "sha512" 00:19:03.491 ], 00:19:03.491 "dhchap_dhgroups": [ 00:19:03.491 "null", 00:19:03.491 "ffdhe2048", 00:19:03.491 "ffdhe3072", 00:19:03.491 "ffdhe4096", 00:19:03.491 "ffdhe6144", 00:19:03.491 "ffdhe8192" 00:19:03.491 ] 00:19:03.491 } 00:19:03.491 }, 00:19:03.491 { 00:19:03.491 "method": "bdev_nvme_attach_controller", 00:19:03.491 "params": { 00:19:03.491 "name": "nvme0", 00:19:03.491 "trtype": "TCP", 00:19:03.491 "adrfam": "IPv4", 00:19:03.491 "traddr": "10.0.0.2", 00:19:03.491 "trsvcid": "4420", 00:19:03.491 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.491 "prchk_reftag": false, 00:19:03.491 "prchk_guard": false, 00:19:03.491 "ctrlr_loss_timeout_sec": 0, 00:19:03.491 "reconnect_delay_sec": 0, 00:19:03.491 "fast_io_fail_timeout_sec": 0, 00:19:03.491 "psk": "key0", 00:19:03.491 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:03.491 "hdgst": false, 00:19:03.491 "ddgst": false 00:19:03.491 } 00:19:03.491 }, 00:19:03.491 { 00:19:03.491 "method": "bdev_nvme_set_hotplug", 00:19:03.491 "params": { 00:19:03.491 "period_us": 100000, 00:19:03.491 "enable": false 00:19:03.491 } 00:19:03.491 }, 00:19:03.491 { 00:19:03.491 "method": "bdev_enable_histogram", 00:19:03.491 "params": { 00:19:03.491 "name": "nvme0n1", 00:19:03.491 "enable": true 00:19:03.491 } 00:19:03.491 }, 00:19:03.491 { 00:19:03.491 "method": "bdev_wait_for_examine" 00:19:03.491 } 00:19:03.491 ] 00:19:03.491 }, 00:19:03.491 { 00:19:03.491 "subsystem": "nbd", 00:19:03.491 "config": [] 00:19:03.491 } 00:19:03.491 ] 00:19:03.491 }' 00:19:03.491 00:20:22 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 1543410 00:19:03.491 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' 
-z 1543410 ']' 00:19:03.491 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1543410 00:19:03.491 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:19:03.491 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:03.491 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1543410 00:19:03.491 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:19:03.491 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:19:03.491 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1543410' 00:19:03.491 killing process with pid 1543410 00:19:03.491 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1543410 00:19:03.491 Received shutdown signal, test time was about 1.000000 seconds 00:19:03.491 00:19:03.491 Latency(us) 00:19:03.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.491 =================================================================================================================== 00:19:03.491 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:03.491 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1543410 00:19:03.750 00:20:22 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 1543245 00:19:03.751 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1543245 ']' 00:19:03.751 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1543245 00:19:03.751 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:19:03.751 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:03.751 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1543245 00:19:03.751 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:19:03.751 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:19:03.751 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1543245' 00:19:03.751 killing process with pid 1543245 00:19:03.751 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1543245 00:19:03.751 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1543245 00:19:04.010 00:20:22 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:19:04.010 00:20:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:04.010 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:04.010 00:20:22 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:19:04.010 "subsystems": [ 00:19:04.010 { 00:19:04.010 "subsystem": "keyring", 00:19:04.010 "config": [ 00:19:04.010 { 00:19:04.010 "method": "keyring_file_add_key", 00:19:04.010 "params": { 00:19:04.010 "name": "key0", 00:19:04.010 "path": "/tmp/tmp.7kFpAoUVEj" 00:19:04.010 } 00:19:04.010 } 00:19:04.010 ] 00:19:04.010 }, 00:19:04.010 { 00:19:04.010 "subsystem": "iobuf", 00:19:04.010 "config": [ 00:19:04.010 { 00:19:04.010 "method": "iobuf_set_options", 00:19:04.010 "params": { 00:19:04.010 "small_pool_count": 8192, 00:19:04.010 "large_pool_count": 1024, 00:19:04.010 "small_bufsize": 8192, 00:19:04.010 "large_bufsize": 135168 00:19:04.010 } 00:19:04.010 } 00:19:04.010 ] 00:19:04.010 }, 00:19:04.010 { 00:19:04.010 "subsystem": 
"sock", 00:19:04.010 "config": [ 00:19:04.010 { 00:19:04.010 "method": "sock_set_default_impl", 00:19:04.010 "params": { 00:19:04.010 "impl_name": "posix" 00:19:04.010 } 00:19:04.010 }, 00:19:04.010 { 00:19:04.010 "method": "sock_impl_set_options", 00:19:04.010 "params": { 00:19:04.010 "impl_name": "ssl", 00:19:04.010 "recv_buf_size": 4096, 00:19:04.010 "send_buf_size": 4096, 00:19:04.010 "enable_recv_pipe": true, 00:19:04.010 "enable_quickack": false, 00:19:04.010 "enable_placement_id": 0, 00:19:04.010 "enable_zerocopy_send_server": true, 00:19:04.010 "enable_zerocopy_send_client": false, 00:19:04.010 "zerocopy_threshold": 0, 00:19:04.010 "tls_version": 0, 00:19:04.010 "enable_ktls": false 00:19:04.010 } 00:19:04.010 }, 00:19:04.010 { 00:19:04.010 "method": "sock_impl_set_options", 00:19:04.010 "params": { 00:19:04.010 "impl_name": "posix", 00:19:04.010 "recv_buf_size": 2097152, 00:19:04.010 "send_buf_size": 2097152, 00:19:04.010 "enable_recv_pipe": true, 00:19:04.010 "enable_quickack": false, 00:19:04.010 "enable_placement_id": 0, 00:19:04.010 "enable_zerocopy_send_server": true, 00:19:04.010 "enable_zerocopy_send_client": false, 00:19:04.010 "zerocopy_threshold": 0, 00:19:04.010 "tls_version": 0, 00:19:04.010 "enable_ktls": false 00:19:04.010 } 00:19:04.010 } 00:19:04.011 ] 00:19:04.011 }, 00:19:04.011 { 00:19:04.011 "subsystem": "vmd", 00:19:04.011 "config": [] 00:19:04.011 }, 00:19:04.011 { 00:19:04.011 "subsystem": "accel", 00:19:04.011 "config": [ 00:19:04.011 { 00:19:04.011 "method": "accel_set_options", 00:19:04.011 "params": { 00:19:04.011 "small_cache_size": 128, 00:19:04.011 "large_cache_size": 16, 00:19:04.011 "task_count": 2048, 00:19:04.011 "sequence_count": 2048, 00:19:04.011 "buf_count": 2048 00:19:04.011 } 00:19:04.011 } 00:19:04.011 ] 00:19:04.011 }, 00:19:04.011 { 00:19:04.011 "subsystem": "bdev", 00:19:04.011 "config": [ 00:19:04.011 { 00:19:04.011 "method": "bdev_set_options", 00:19:04.011 "params": { 00:19:04.011 "bdev_io_pool_size": 65535, 00:19:04.011 "bdev_io_cache_size": 256, 00:19:04.011 "bdev_auto_examine": true, 00:19:04.011 "iobuf_small_cache_size": 128, 00:19:04.011 "iobuf_large_cache_size": 16 00:19:04.011 } 00:19:04.011 }, 00:19:04.011 { 00:19:04.011 "method": "bdev_raid_set_options", 00:19:04.011 "params": { 00:19:04.011 "process_window_size_kb": 1024 00:19:04.011 } 00:19:04.011 }, 00:19:04.011 { 00:19:04.011 "method": "bdev_iscsi_set_options", 00:19:04.011 "params": { 00:19:04.011 "timeout_sec": 30 00:19:04.011 } 00:19:04.011 }, 00:19:04.011 { 00:19:04.011 "method": "bdev_nvme_set_options", 00:19:04.011 "params": { 00:19:04.011 "action_on_timeout": "none", 00:19:04.011 "timeout_us": 0, 00:19:04.011 "timeout_admin_us": 0, 00:19:04.011 "keep_alive_timeout_ms": 10000, 00:19:04.011 "arbitration_burst": 0, 00:19:04.011 "low_priority_weight": 0, 00:19:04.011 "medium_priority_weight": 0, 00:19:04.011 "high_priority_weight": 0, 00:19:04.011 "nvme_adminq_poll_period_us": 10000, 00:19:04.011 "nvme_ioq_poll_period_us": 0, 00:19:04.011 "io_queue_requests": 0, 00:19:04.011 "delay_cmd_submit": true, 00:19:04.011 "transport_retry_count": 4, 00:19:04.011 "bdev_retry_count": 3, 00:19:04.011 "transport_ack_timeout": 0, 00:19:04.011 "ctrlr_loss_timeout_sec": 0, 00:19:04.011 "reconnect_delay_sec": 0, 00:19:04.011 "fast_io_fail_timeout_sec": 0, 00:19:04.011 "disable_auto_failback": false, 00:19:04.011 "generate_uuids": false, 00:19:04.011 "transport_tos": 0, 00:19:04.011 "nvme_error_stat": false, 00:19:04.011 "rdma_srq_size": 0, 00:19:04.011 "io_path_stat": false, 
00:19:04.011 "allow_accel_sequence": false, 00:19:04.011 "rdma_max_cq_size": 0, 00:19:04.011 "rdma_cm_event_timeout_ms": 0, 00:19:04.011 "dhchap_digests": [ 00:19:04.011 "sha256", 00:19:04.011 "sha384", 00:19:04.011 "sha512" 00:19:04.011 ], 00:19:04.011 "dhchap_dhgroups": [ 00:19:04.011 "null", 00:19:04.011 "ffdhe2048", 00:19:04.011 "ffdhe3072", 00:19:04.011 "ffdhe4096", 00:19:04.011 "ffdhe6144", 00:19:04.011 "ffdhe8192" 00:19:04.011 ] 00:19:04.011 } 00:19:04.011 }, 00:19:04.011 { 00:19:04.011 "method": "bdev_nvme_set_hotplug", 00:19:04.011 "params": { 00:19:04.011 "period_us": 100000, 00:19:04.011 "enable": false 00:19:04.011 } 00:19:04.011 }, 00:19:04.011 { 00:19:04.011 "method": "bdev_malloc_create", 00:19:04.011 "params": { 00:19:04.011 "name": "malloc0", 00:19:04.011 "num_blocks": 8192, 00:19:04.011 "block_size": 4096, 00:19:04.011 "physical_block_size": 4096, 00:19:04.011 "uuid": "add9ec5f-9632-4bfc-96fe-1a164fa5da20", 00:19:04.011 "optimal_io_boundary": 0 00:19:04.011 } 00:19:04.011 }, 00:19:04.011 { 00:19:04.011 "method": "bdev_wait_for_examine" 00:19:04.011 } 00:19:04.011 ] 00:19:04.011 }, 00:19:04.011 { 00:19:04.011 "subsystem": "nbd", 00:19:04.011 "config": [] 00:19:04.011 }, 00:19:04.011 { 00:19:04.011 "subsystem": "scheduler", 00:19:04.011 "config": [ 00:19:04.011 { 00:19:04.011 "method": "framework_set_scheduler", 00:19:04.011 "params": { 00:19:04.011 "name": "static" 00:19:04.011 } 00:19:04.011 } 00:19:04.011 ] 00:19:04.011 }, 00:19:04.011 { 00:19:04.011 "subsystem": "nvmf", 00:19:04.011 "config": [ 00:19:04.011 { 00:19:04.011 "method": "nvmf_set_config", 00:19:04.011 "params": { 00:19:04.011 "discovery_filter": "match_any", 00:19:04.011 "admin_cmd_passthru": { 00:19:04.011 "identify_ctrlr": false 00:19:04.011 } 00:19:04.011 } 00:19:04.011 }, 00:19:04.011 { 00:19:04.011 "method": "nvmf_set_max_subsystems", 00:19:04.011 "params": { 00:19:04.011 "max_subsystems": 1024 00:19:04.011 } 00:19:04.011 }, 00:19:04.011 { 00:19:04.011 "method": "nvmf_set_crdt", 00:19:04.011 "params": { 00:19:04.011 "crdt1": 0, 00:19:04.011 "crdt2": 0, 00:19:04.011 "crdt3": 0 00:19:04.011 } 00:19:04.011 }, 00:19:04.011 { 00:19:04.011 "method": "nvmf_create_transport", 00:19:04.011 "params": { 00:19:04.011 "trtype": "TCP", 00:19:04.011 "max_queue_depth": 128, 00:19:04.011 "max_io_qpairs_per_ctrlr": 127, 00:19:04.011 "in_capsule_data_size": 4096, 00:19:04.011 "max_io_size": 131072, 00:19:04.011 "io_unit_size": 131072, 00:19:04.011 "max_aq_depth": 128, 00:19:04.011 "num_shared_buffers": 511, 00:19:04.011 "buf_cache_size": 4294967295, 00:19:04.011 "dif_insert_or_strip": false, 00:19:04.011 "zcopy": false, 00:19:04.011 "c2h_success": false, 00:19:04.011 "sock_priority": 0, 00:19:04.011 "abort_timeout_sec": 1, 00:19:04.011 "ack_timeout": 0, 00:19:04.011 "data_wr_pool_size": 0 00:19:04.011 } 00:19:04.011 }, 00:19:04.011 { 00:19:04.011 "method": "nvmf_create_subsystem", 00:19:04.011 "params": { 00:19:04.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.011 "allow_any_host": false, 00:19:04.011 "serial_number": "00000000000000000000", 00:19:04.011 "model_number": "SPDK bdev Controller", 00:19:04.011 "max_namespaces": 32, 00:19:04.011 "min_cntlid": 1, 00:19:04.011 "max_cntlid": 65519, 00:19:04.011 "ana_reporting": false 00:19:04.011 } 00:19:04.011 }, 00:19:04.011 { 00:19:04.011 "method": "nvmf_subsystem_add_host", 00:19:04.011 "params": { 00:19:04.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.011 "host": "nqn.2016-06.io.spdk:host1", 00:19:04.011 "psk": "key0" 00:19:04.011 } 00:19:04.011 }, 00:19:04.011 { 
00:19:04.011 "method": "nvmf_subsystem_add_ns", 00:19:04.011 "params": { 00:19:04.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.011 "namespace": { 00:19:04.011 "nsid": 1, 00:19:04.011 "bdev_name": "malloc0", 00:19:04.011 "nguid": "ADD9EC5F96324BFC96FE1A164FA5DA20", 00:19:04.011 "uuid": "add9ec5f-9632-4bfc-96fe-1a164fa5da20", 00:19:04.011 "no_auto_visible": false 00:19:04.011 } 00:19:04.011 } 00:19:04.011 }, 00:19:04.011 { 00:19:04.011 "method": "nvmf_subsystem_add_listener", 00:19:04.011 "params": { 00:19:04.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.011 "listen_address": { 00:19:04.011 "trtype": "TCP", 00:19:04.011 "adrfam": "IPv4", 00:19:04.011 "traddr": "10.0.0.2", 00:19:04.011 "trsvcid": "4420" 00:19:04.011 }, 00:19:04.011 "secure_channel": false, 00:19:04.011 "sock_impl": "ssl" 00:19:04.011 } 00:19:04.011 } 00:19:04.011 ] 00:19:04.011 } 00:19:04.011 ] 00:19:04.011 }' 00:19:04.011 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.011 00:20:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1543968 00:19:04.011 00:20:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:04.011 00:20:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1543968 00:19:04.011 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1543968 ']' 00:19:04.011 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.011 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:04.011 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.011 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:04.011 00:20:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.011 [2024-07-16 00:20:22.774986] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:19:04.011 [2024-07-16 00:20:22.775035] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.011 [2024-07-16 00:20:22.831865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.271 [2024-07-16 00:20:22.908956] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.271 [2024-07-16 00:20:22.908994] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.271 [2024-07-16 00:20:22.909002] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.271 [2024-07-16 00:20:22.909008] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.271 [2024-07-16 00:20:22.909012] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:04.271 [2024-07-16 00:20:22.909067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.271 [2024-07-16 00:20:23.120937] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.538 [2024-07-16 00:20:23.152962] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:04.539 [2024-07-16 00:20:23.161518] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.803 00:20:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:04.803 00:20:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:04.803 00:20:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:04.803 00:20:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:04.803 00:20:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.803 00:20:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.803 00:20:23 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1544090 00:19:04.803 00:20:23 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1544090 /var/tmp/bdevperf.sock 00:19:04.803 00:20:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1544090 ']' 00:19:04.803 00:20:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.803 00:20:23 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:04.803 00:20:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:04.803 00:20:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
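The same replay trick is applied to the initiator: the bperfcfg JSON echoed below (keyring, sock, bdev and nbd subsystems, including the bdev_nvme_attach_controller and bdev_enable_histogram calls) is handed to bdevperf at startup via -c /dev/fd/63, so the TLS attach happens during initialization instead of over post-start RPCs. Schematically:

    "$SPDK/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &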
00:19:04.803 00:20:23 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:19:04.803 "subsystems": [ 00:19:04.803 { 00:19:04.803 "subsystem": "keyring", 00:19:04.803 "config": [ 00:19:04.803 { 00:19:04.803 "method": "keyring_file_add_key", 00:19:04.803 "params": { 00:19:04.803 "name": "key0", 00:19:04.803 "path": "/tmp/tmp.7kFpAoUVEj" 00:19:04.803 } 00:19:04.803 } 00:19:04.803 ] 00:19:04.803 }, 00:19:04.803 { 00:19:04.803 "subsystem": "iobuf", 00:19:04.803 "config": [ 00:19:04.803 { 00:19:04.803 "method": "iobuf_set_options", 00:19:04.803 "params": { 00:19:04.803 "small_pool_count": 8192, 00:19:04.803 "large_pool_count": 1024, 00:19:04.803 "small_bufsize": 8192, 00:19:04.803 "large_bufsize": 135168 00:19:04.803 } 00:19:04.803 } 00:19:04.803 ] 00:19:04.803 }, 00:19:04.803 { 00:19:04.803 "subsystem": "sock", 00:19:04.803 "config": [ 00:19:04.803 { 00:19:04.803 "method": "sock_set_default_impl", 00:19:04.803 "params": { 00:19:04.803 "impl_name": "posix" 00:19:04.803 } 00:19:04.803 }, 00:19:04.803 { 00:19:04.803 "method": "sock_impl_set_options", 00:19:04.803 "params": { 00:19:04.803 "impl_name": "ssl", 00:19:04.803 "recv_buf_size": 4096, 00:19:04.803 "send_buf_size": 4096, 00:19:04.803 "enable_recv_pipe": true, 00:19:04.803 "enable_quickack": false, 00:19:04.803 "enable_placement_id": 0, 00:19:04.803 "enable_zerocopy_send_server": true, 00:19:04.803 "enable_zerocopy_send_client": false, 00:19:04.803 "zerocopy_threshold": 0, 00:19:04.803 "tls_version": 0, 00:19:04.803 "enable_ktls": false 00:19:04.803 } 00:19:04.803 }, 00:19:04.803 { 00:19:04.803 "method": "sock_impl_set_options", 00:19:04.803 "params": { 00:19:04.803 "impl_name": "posix", 00:19:04.803 "recv_buf_size": 2097152, 00:19:04.803 "send_buf_size": 2097152, 00:19:04.803 "enable_recv_pipe": true, 00:19:04.803 "enable_quickack": false, 00:19:04.803 "enable_placement_id": 0, 00:19:04.803 "enable_zerocopy_send_server": true, 00:19:04.803 "enable_zerocopy_send_client": false, 00:19:04.803 "zerocopy_threshold": 0, 00:19:04.803 "tls_version": 0, 00:19:04.803 "enable_ktls": false 00:19:04.803 } 00:19:04.803 } 00:19:04.803 ] 00:19:04.803 }, 00:19:04.803 { 00:19:04.803 "subsystem": "vmd", 00:19:04.803 "config": [] 00:19:04.803 }, 00:19:04.803 { 00:19:04.803 "subsystem": "accel", 00:19:04.803 "config": [ 00:19:04.803 { 00:19:04.803 "method": "accel_set_options", 00:19:04.803 "params": { 00:19:04.803 "small_cache_size": 128, 00:19:04.803 "large_cache_size": 16, 00:19:04.803 "task_count": 2048, 00:19:04.803 "sequence_count": 2048, 00:19:04.803 "buf_count": 2048 00:19:04.803 } 00:19:04.803 } 00:19:04.803 ] 00:19:04.803 }, 00:19:04.803 { 00:19:04.803 "subsystem": "bdev", 00:19:04.803 "config": [ 00:19:04.803 { 00:19:04.803 "method": "bdev_set_options", 00:19:04.803 "params": { 00:19:04.803 "bdev_io_pool_size": 65535, 00:19:04.803 "bdev_io_cache_size": 256, 00:19:04.803 "bdev_auto_examine": true, 00:19:04.803 "iobuf_small_cache_size": 128, 00:19:04.803 "iobuf_large_cache_size": 16 00:19:04.803 } 00:19:04.803 }, 00:19:04.803 { 00:19:04.803 "method": "bdev_raid_set_options", 00:19:04.803 "params": { 00:19:04.803 "process_window_size_kb": 1024 00:19:04.803 } 00:19:04.803 }, 00:19:04.803 { 00:19:04.803 "method": "bdev_iscsi_set_options", 00:19:04.803 "params": { 00:19:04.803 "timeout_sec": 30 00:19:04.803 } 00:19:04.803 }, 00:19:04.803 { 00:19:04.803 "method": "bdev_nvme_set_options", 00:19:04.803 "params": { 00:19:04.803 "action_on_timeout": "none", 00:19:04.803 "timeout_us": 0, 00:19:04.804 "timeout_admin_us": 0, 00:19:04.804 "keep_alive_timeout_ms": 
10000, 00:19:04.804 "arbitration_burst": 0, 00:19:04.804 "low_priority_weight": 0, 00:19:04.804 "medium_priority_weight": 0, 00:19:04.804 "high_priority_weight": 0, 00:19:04.804 "nvme_adminq_poll_period_us": 10000, 00:19:04.804 "nvme_ioq_poll_period_us": 0, 00:19:04.804 "io_queue_requests": 512, 00:19:04.804 "delay_cmd_submit": true, 00:19:04.804 "transport_retry_count": 4, 00:19:04.804 "bdev_retry_count": 3, 00:19:04.804 "transport_ack_timeout": 0, 00:19:04.804 "ctrlr_loss_timeout_sec": 0, 00:19:04.804 "reconnect_delay_sec": 0, 00:19:04.804 "fast_io_fail_timeout_sec": 0, 00:19:04.804 "disable_auto_failback": false, 00:19:04.804 "generate_uuids": false, 00:19:04.804 "transport_tos": 0, 00:19:04.804 "nvme_error_stat": false, 00:19:04.804 "rdma_srq_size": 0, 00:19:04.804 "io_path_stat": false, 00:19:04.804 "allow_accel_sequence": false, 00:19:04.804 "rdma_max_cq_size": 0, 00:19:04.804 "rdma_cm_event_timeout_ms": 0, 00:19:04.804 "dhchap_digests": [ 00:19:04.804 "sha256", 00:19:04.804 "sha384", 00:19:04.804 "sha512" 00:19:04.804 ], 00:19:04.804 "dhchap_dhgroups": [ 00:19:04.804 "null", 00:19:04.804 "ffdhe2048", 00:19:04.804 "ffdhe3072", 00:19:04.804 "ffdhe4096", 00:19:04.804 "ffdhe6144", 00:19:04.804 "ffdhe8192" 00:19:04.804 ] 00:19:04.804 } 00:19:04.804 }, 00:19:04.804 { 00:19:04.804 "method": "bdev_nvme_attach_controller", 00:19:04.804 "params": { 00:19:04.804 "name": "nvme0", 00:19:04.804 "trtype": "TCP", 00:19:04.804 "adrfam": "IPv4", 00:19:04.804 "traddr": "10.0.0.2", 00:19:04.804 "trsvcid": "4420", 00:19:04.804 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.804 "prchk_reftag": false, 00:19:04.804 "prchk_guard": false, 00:19:04.804 "ctrlr_loss_timeout_sec": 0, 00:19:04.804 "reconnect_delay_sec": 0, 00:19:04.804 "fast_io_fail_timeout_sec": 0, 00:19:04.804 "psk": "key0", 00:19:04.804 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:04.804 "hdgst": false, 00:19:04.804 "ddgst": false 00:19:04.804 } 00:19:04.804 }, 00:19:04.804 { 00:19:04.804 "method": "bdev_nvme_set_hotplug", 00:19:04.804 "params": { 00:19:04.804 "period_us": 100000, 00:19:04.804 "enable": false 00:19:04.804 } 00:19:04.804 }, 00:19:04.804 { 00:19:04.804 "method": "bdev_enable_histogram", 00:19:04.804 "params": { 00:19:04.804 "name": "nvme0n1", 00:19:04.804 "enable": true 00:19:04.804 } 00:19:04.804 }, 00:19:04.804 { 00:19:04.804 "method": "bdev_wait_for_examine" 00:19:04.804 } 00:19:04.804 ] 00:19:04.804 }, 00:19:04.804 { 00:19:04.804 "subsystem": "nbd", 00:19:04.804 "config": [] 00:19:04.804 } 00:19:04.804 ] 00:19:04.804 }' 00:19:04.804 00:20:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:04.804 00:20:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.062 [2024-07-16 00:20:23.660460] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
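Because the attach now happens at startup, the test verifies it out of band before running I/O: it lists controllers over the bdevperf RPC socket and checks that nvme0 is present, as the trace lines that follow show. The check reduces to:

    name=$("$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1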
00:19:05.062 [2024-07-16 00:20:23.660509] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1544090 ] 00:19:05.062 [2024-07-16 00:20:23.714422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.062 [2024-07-16 00:20:23.790379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.320 [2024-07-16 00:20:23.941660] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:05.888 00:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:05.888 00:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:05.888 00:20:24 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:05.888 00:20:24 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:19:05.888 00:20:24 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.888 00:20:24 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:05.888 Running I/O for 1 seconds... 00:19:07.266 00:19:07.266 Latency(us) 00:19:07.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.266 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:07.266 Verification LBA range: start 0x0 length 0x2000 00:19:07.266 nvme0n1 : 1.03 5033.22 19.66 0.00 0.00 25067.65 6069.20 75223.93 00:19:07.266 =================================================================================================================== 00:19:07.266 Total : 5033.22 19.66 0.00 0.00 25067.65 6069.20 75223.93 00:19:07.266 0 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@800 -- # type=--id 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@801 -- # id=0 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@802 -- # '[' --id = --pid ']' 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # shm_files=nvmf_trace.0 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # [[ -z nvmf_trace.0 ]] 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # for n in $shm_files 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:07.266 nvmf_trace.0 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # return 0 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1544090 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1544090 ']' 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1544090 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@947 -- # uname 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1544090 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1544090' 00:19:07.266 killing process with pid 1544090 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1544090 00:19:07.266 Received shutdown signal, test time was about 1.000000 seconds 00:19:07.266 00:19:07.266 Latency(us) 00:19:07.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.266 =================================================================================================================== 00:19:07.266 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:07.266 00:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1544090 00:19:07.266 00:20:26 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:07.266 00:20:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:07.266 00:20:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:07.266 00:20:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:07.266 00:20:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:07.266 00:20:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:07.266 00:20:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:07.266 rmmod nvme_tcp 00:19:07.266 rmmod nvme_fabrics 00:19:07.266 rmmod nvme_keyring 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1543968 ']' 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1543968 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1543968 ']' 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1543968 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1543968 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1543968' 00:19:07.525 killing process with pid 1543968 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1543968 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1543968 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
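Teardown follows the killprocess/nvmftestfini sequence that recurs throughout this log: confirm the pid still exists, inspect its comm name, signal it and reap it. A sketch reconstructed from the xtrace above (the real helper in autotest_common.sh also special-cases processes running under sudo):

    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1
        kill -0 "$pid" || return 1                 # is the process still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($name)"
        kill "$pid" && wait "$pid"
    }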
00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.525 00:20:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.060 00:20:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:10.060 00:20:28 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.O3TMlue9DI /tmp/tmp.0UtCKdHTiv /tmp/tmp.7kFpAoUVEj 00:19:10.060 00:19:10.060 real 1m22.330s 00:19:10.060 user 2m7.398s 00:19:10.060 sys 0m27.624s 00:19:10.060 00:20:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1118 -- # xtrace_disable 00:19:10.060 00:20:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.060 ************************************ 00:19:10.060 END TEST nvmf_tls 00:19:10.060 ************************************ 00:19:10.060 00:20:28 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:19:10.060 00:20:28 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:10.060 00:20:28 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:19:10.060 00:20:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:19:10.060 00:20:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:10.060 ************************************ 00:19:10.060 START TEST nvmf_fips 00:19:10.060 ************************************ 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:10.060 * Looking for test storage... 
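The nvmf_fips test starting here gates first on the OpenSSL version: fips.sh reads openssl version through awk and requires at least 3.0.0 via the ge/cmp_versions helpers, whose component-by-component walk is traced below. Compressed into a standalone check (the real cmp_versions also handles other comparison operators and mixed-length version strings):

    target=3.0.0
    version=$(openssl version | awk '{print $2}')   # 3.0.9 on this runner
    IFS=.-: read -ra ver1 <<< "$version"
    IFS=.-: read -ra ver2 <<< "$target"
    for ((v = 0; v < 3; v++)); do
        # Unset components evaluate to 0 inside (( )).
        (( ver1[v] > ver2[v] )) && break            # strictly newer: pass
        (( ver1[v] < ver2[v] )) && { echo "openssl $version < $target"; exit 1; }
    done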
00:19:10.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.060 00:20:28 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:10.060 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # local es=0 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@644 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@630 -- # local arg=openssl 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@634 -- # type -t openssl 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # type -P openssl 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # arg=/usr/bin/openssl 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # [[ -x /usr/bin/openssl ]] 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@645 -- # openssl md5 /dev/fd/62 00:19:10.061 Error setting digest 00:19:10.061 00C2EB63367F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:10.061 00C2EB63367F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@645 -- # es=1 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:19:10.061 00:20:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:15.322 
00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:15.322 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:15.322 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:15.322 Found net devices under 0000:86:00.0: cvl_0_0 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:15.322 Found net devices under 0000:86:00.1: cvl_0_1 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:15.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:15.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:19:15.322 00:19:15.322 --- 10.0.0.2 ping statistics --- 00:19:15.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.322 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:15.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:15.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:19:15.322 00:19:15.322 --- 10.0.0.1 ping statistics --- 00:19:15.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.322 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1547898 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1547898 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@823 -- # '[' -z 1547898 ']' 00:19:15.322 00:20:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.323 00:20:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:15.323 00:20:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.323 00:20:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:15.323 00:20:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:15.323 [2024-07-16 00:20:33.541653] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:19:15.323 [2024-07-16 00:20:33.541696] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.323 [2024-07-16 00:20:33.600475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.323 [2024-07-16 00:20:33.676128] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.323 [2024-07-16 00:20:33.676167] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
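The nvmf_tcp_init sequence traced above reduces to a small recipe: park one port of the back-to-back pair in a private network namespace for the target, keep the other on the host for the initiator, and verify reachability both ways before launching nvmf_tgt inside the namespace. A condensed sketch of just those commands, assuming the interface names cvl_0_0/cvl_0_1 from this run:

# Target port goes into its own namespace; initiator stays on the host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP port
ping -c 1 10.0.0.2                                                   # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> host

Every target-side command that follows, including nvmf_tgt itself, is then prefixed with "ip netns exec cvl_0_0_ns_spdk", which is exactly what the NVMF_TARGET_NS_CMD array set above captures.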
00:19:15.323 [2024-07-16 00:20:33.676174] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.323 [2024-07-16 00:20:33.676179] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.323 [2024-07-16 00:20:33.676184] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:15.323 [2024-07-16 00:20:33.676207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.581 00:20:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:15.581 00:20:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # return 0 00:19:15.581 00:20:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:15.581 00:20:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:15.581 00:20:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:15.581 00:20:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.581 00:20:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:15.581 00:20:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:15.581 00:20:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:15.581 00:20:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:15.581 00:20:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:15.581 00:20:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:15.581 00:20:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:15.581 00:20:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:15.840 [2024-07-16 00:20:34.517959] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.840 [2024-07-16 00:20:34.533965] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:15.840 [2024-07-16 00:20:34.534132] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.840 [2024-07-16 00:20:34.562154] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:15.840 malloc0 00:19:15.840 00:20:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:15.840 00:20:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:15.840 00:20:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1548037 00:19:15.840 00:20:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1548037 /var/tmp/bdevperf.sock 00:19:15.840 00:20:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@823 -- # '[' -z 1548037 ']' 00:19:15.840 00:20:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.840 00:20:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- 
# local max_retries=100 00:19:15.840 00:20:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.840 00:20:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:15.840 00:20:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:15.840 [2024-07-16 00:20:34.625669] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:19:15.840 [2024-07-16 00:20:34.625717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1548037 ] 00:19:15.840 [2024-07-16 00:20:34.677828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.099 [2024-07-16 00:20:34.752803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.665 00:20:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:16.665 00:20:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # return 0 00:19:16.665 00:20:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:16.923 [2024-07-16 00:20:35.566484] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.923 [2024-07-16 00:20:35.566565] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:16.923 TLSTESTn1 00:19:16.923 00:20:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:16.923 Running I/O for 10 seconds... 
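The initiator side of the FIPS run is driven entirely over bdevperf's RPC socket; the attach call above passes the TLS PSK as a file path, which is the deprecated interface both WARNING lines refer to. The two RPC invocations, reproduced from the trace with only the long workspace prefixes shortened:

# Attach a TLS-wrapped NVMe/TCP controller through bdevperf's RPC socket,
# then kick off the queued verify workload (-q 128, 4 KiB I/O, 10 s, per the
# bdevperf command line above).
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk key.txt                 # NVMeTLSkey-1:01:..., chmod 0600 beforehand
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests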
00:19:29.141 00:19:29.141 Latency(us) 00:19:29.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.141 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:29.141 Verification LBA range: start 0x0 length 0x2000 00:19:29.141 TLSTESTn1 : 10.02 5569.40 21.76 0.00 0.00 22944.35 6696.07 49009.53 00:19:29.141 =================================================================================================================== 00:19:29.141 Total : 5569.40 21.76 0.00 0.00 22944.35 6696.07 49009.53 00:19:29.141 0 00:19:29.141 00:20:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:29.141 00:20:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:29.141 00:20:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@800 -- # type=--id 00:19:29.141 00:20:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@801 -- # id=0 00:19:29.141 00:20:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@802 -- # '[' --id = --pid ']' 00:19:29.141 00:20:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:29.141 00:20:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # shm_files=nvmf_trace.0 00:19:29.141 00:20:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # [[ -z nvmf_trace.0 ]] 00:19:29.141 00:20:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # for n in $shm_files 00:19:29.141 00:20:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:29.141 nvmf_trace.0 00:19:29.141 00:20:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # return 0 00:19:29.141 00:20:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1548037 00:19:29.141 00:20:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@942 -- # '[' -z 1548037 ']' 00:19:29.141 00:20:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # kill -0 1548037 00:19:29.141 00:20:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # uname 00:19:29.141 00:20:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:29.141 00:20:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1548037 00:19:29.142 00:20:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:19:29.142 00:20:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:19:29.142 00:20:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1548037' 00:19:29.142 killing process with pid 1548037 00:19:29.142 00:20:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@961 -- # kill 1548037 00:19:29.142 Received shutdown signal, test time was about 10.000000 seconds 00:19:29.142 00:19:29.142 Latency(us) 00:19:29.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.142 =================================================================================================================== 00:19:29.142 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:29.142 [2024-07-16 00:20:45.913080] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:29.142 00:20:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # wait 1548037 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:29.142 rmmod nvme_tcp 00:19:29.142 rmmod nvme_fabrics 00:19:29.142 rmmod nvme_keyring 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1547898 ']' 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1547898 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@942 -- # '[' -z 1547898 ']' 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # kill -0 1547898 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # uname 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1547898 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1547898' 00:19:29.142 killing process with pid 1547898 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@961 -- # kill 1547898 00:19:29.142 [2024-07-16 00:20:46.196622] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # wait 1547898 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:29.142 00:20:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.710 00:20:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:29.710 00:20:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:29.710 00:19:29.710 real 0m19.961s 00:19:29.710 user 0m22.452s 00:19:29.710 sys 0m8.143s 00:19:29.710 00:20:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1118 -- # xtrace_disable 00:19:29.710 00:20:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:29.710 ************************************ 00:19:29.710 END TEST nvmf_fips 
00:19:29.710 ************************************ 00:19:29.710 00:20:48 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:19:29.710 00:20:48 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:29.710 00:20:48 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:19:29.710 00:20:48 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:19:29.710 00:20:48 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:19:29.710 00:20:48 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:19:29.710 00:20:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:35.017 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:35.017 00:20:53 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:35.017 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:35.017 Found net devices under 0000:86:00.0: cvl_0_0 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:35.017 Found net devices under 0000:86:00.1: cvl_0_1 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:19:35.017 00:20:53 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:35.017 00:20:53 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:19:35.017 00:20:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 
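gather_supported_nvmf_pci_devs, traced twice above and again below for the perf_adq test, is essentially a vendor:device lookup table folded into bash arrays. A condensed paraphrase, not standalone (it assumes pci_bus_cache has been populated elsewhere in the harness, and the rdma-only branches are elided); the device IDs shown are the ones matched in this run:

# Bucket NICs by PCI vendor:device, then resolve each to its netdev name.
declare -A pci_bus_cache
declare -a e810 x722 mlx pci_devs net_devs pci_net_devs
intel=0x8086 mellanox=0x15b3
e810+=(${pci_bus_cache["$intel:0x1592"]})      # E810-C
e810+=(${pci_bus_cache["$intel:0x159b"]})      # E810-XXV (both 0000:86:00.* ports here)
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})    # one of several ConnectX IDs probed
pci_devs=("${e810[@]}")                        # e810-only selection on this TCP rig
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # PCI address -> sysfs netdevs
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip path, keep names
    net_devs+=("${pci_net_devs[@]}")                   # yields cvl_0_0, cvl_0_1
done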
00:19:35.017 00:20:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:35.017 ************************************ 00:19:35.017 START TEST nvmf_perf_adq 00:19:35.017 ************************************ 00:19:35.017 00:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:35.017 * Looking for test storage... 00:19:35.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:35.017 00:20:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.017 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:35.017 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.017 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.017 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.017 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.017 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.017 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.017 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.017 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.017 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.017 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.017 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:35.018 00:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:40.302 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:40.302 Found 0000:86:00.1 (0x8086 - 0x159b) 
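A few lines up, the perf_adq preamble re-sourced nvmf/common.sh, so the NVMe-oF connection identity is regenerated for each test before this second device scan runs. The relevant settings, paraphrased from the trace (the UUID is this host's, as printed by nvme gen-hostnqn; common.sh's exact derivation of the host ID may differ from the expansion used here):

# Per-test NVMe-oF addressing and host identity, as set up by nvmf/common.sh.
NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
NVME_HOSTID=${NVME_HOSTNQN##*:}       # bare UUID form, passed as --hostid
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")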
00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:40.302 Found net devices under 0000:86:00.0: cvl_0_0 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:40.302 Found net devices under 0000:86:00.1: cvl_0_1 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:40.302 00:20:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:41.238 00:20:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:43.142 00:21:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:48.410 00:21:06 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:48.410 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:48.411 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:48.411 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:48.411 Found net devices under 0000:86:00.0: cvl_0_0 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:48.411 Found net devices under 0000:86:00.1: cvl_0_1 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:48.411 00:21:06 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:48.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:48.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:19:48.411 00:19:48.411 --- 10.0.0.2 ping statistics --- 00:19:48.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.411 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:48.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:48.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:19:48.411 00:19:48.411 --- 10.0.0.1 ping statistics --- 00:19:48.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.411 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:48.411 00:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:48.411 00:21:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:48.411 00:21:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:48.411 00:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:48.411 00:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.411 00:21:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1558073 00:19:48.411 00:21:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1558073 00:19:48.411 00:21:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:48.411 00:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@823 -- # '[' -z 1558073 ']' 00:19:48.411 00:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.411 00:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:48.411 00:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.411 00:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:48.411 00:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.411 [2024-07-16 00:21:07.080609] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
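For reference, the namespace plumbing that the ping checks above just validated, and inside which the target is now booting, reduces to the sequence below. This is a sketch assembled from the ip/iptables commands recorded earlier in this trace, using the cvl_0_0/cvl_0_1 names of the two E810 ports; nothing here is new beyond the comments.

    # Hide one port in a private namespace so initiator->target traffic has to
    # cross the physical link instead of short-circuiting over loopback.
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean addresses
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (host ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP listener port
    # The target binary is then wrapped in the namespace, exactly as above:
    #   ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc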
00:19:48.411 [2024-07-16 00:21:07.080652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.411 [2024-07-16 00:21:07.139079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:48.411 [2024-07-16 00:21:07.213228] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.412 [2024-07-16 00:21:07.213272] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.412 [2024-07-16 00:21:07.213279] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.412 [2024-07-16 00:21:07.213288] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.412 [2024-07-16 00:21:07.213293] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.412 [2024-07-16 00:21:07.213346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.412 [2024-07-16 00:21:07.213465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.412 [2024-07-16 00:21:07.213550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:48.412 [2024-07-16 00:21:07.213552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.349 00:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:49.349 00:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # return 0 00:19:49.349 00:21:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:49.349 00:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:49.349 00:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:49.349 00:21:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.349 00:21:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:49.349 00:21:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:49.349 00:21:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:49.349 00:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:49.349 00:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:49.349 00:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:49.349 00:21:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:49.349 00:21:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:49.349 00:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:49.349 00:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:49.349 00:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:49.349 00:21:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:49.349 00:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:49.349 00:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:49.349 00:21:08 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:49.349 [2024-07-16 00:21:08.079964] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:49.349 Malloc1 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:49.349 [2024-07-16 00:21:08.127470] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1558175 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:49.349 00:21:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:51.885 00:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:51.885 00:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:51.885 00:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.885 00:21:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:51.885 00:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:51.885 "tick_rate": 2300000000, 00:19:51.885 "poll_groups": [ 00:19:51.885 { 00:19:51.885 "name": "nvmf_tgt_poll_group_000", 
00:19:51.885 "admin_qpairs": 1, 00:19:51.885 "io_qpairs": 1, 00:19:51.885 "current_admin_qpairs": 1, 00:19:51.885 "current_io_qpairs": 1, 00:19:51.885 "pending_bdev_io": 0, 00:19:51.885 "completed_nvme_io": 20217, 00:19:51.885 "transports": [ 00:19:51.885 { 00:19:51.885 "trtype": "TCP" 00:19:51.885 } 00:19:51.885 ] 00:19:51.885 }, 00:19:51.885 { 00:19:51.885 "name": "nvmf_tgt_poll_group_001", 00:19:51.885 "admin_qpairs": 0, 00:19:51.885 "io_qpairs": 1, 00:19:51.885 "current_admin_qpairs": 0, 00:19:51.885 "current_io_qpairs": 1, 00:19:51.885 "pending_bdev_io": 0, 00:19:51.885 "completed_nvme_io": 20793, 00:19:51.885 "transports": [ 00:19:51.885 { 00:19:51.885 "trtype": "TCP" 00:19:51.885 } 00:19:51.885 ] 00:19:51.885 }, 00:19:51.885 { 00:19:51.885 "name": "nvmf_tgt_poll_group_002", 00:19:51.885 "admin_qpairs": 0, 00:19:51.885 "io_qpairs": 1, 00:19:51.885 "current_admin_qpairs": 0, 00:19:51.885 "current_io_qpairs": 1, 00:19:51.885 "pending_bdev_io": 0, 00:19:51.885 "completed_nvme_io": 20599, 00:19:51.885 "transports": [ 00:19:51.885 { 00:19:51.885 "trtype": "TCP" 00:19:51.885 } 00:19:51.885 ] 00:19:51.885 }, 00:19:51.885 { 00:19:51.885 "name": "nvmf_tgt_poll_group_003", 00:19:51.885 "admin_qpairs": 0, 00:19:51.885 "io_qpairs": 1, 00:19:51.885 "current_admin_qpairs": 0, 00:19:51.885 "current_io_qpairs": 1, 00:19:51.885 "pending_bdev_io": 0, 00:19:51.885 "completed_nvme_io": 20154, 00:19:51.885 "transports": [ 00:19:51.885 { 00:19:51.885 "trtype": "TCP" 00:19:51.885 } 00:19:51.885 ] 00:19:51.885 } 00:19:51.885 ] 00:19:51.885 }' 00:19:51.885 00:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:51.885 00:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:51.885 00:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:51.885 00:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:51.885 00:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1558175 00:20:00.008 Initializing NVMe Controllers 00:20:00.008 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:00.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:00.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:00.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:00.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:00.008 Initialization complete. Launching workers. 
00:20:00.008 ======================================================== 00:20:00.008 Latency(us) 00:20:00.008 Device Information : IOPS MiB/s Average min max 00:20:00.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10650.20 41.60 6021.17 2017.17 44871.20 00:20:00.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10953.00 42.79 5843.65 1766.97 10097.63 00:20:00.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10824.60 42.28 5911.74 1828.11 10107.86 00:20:00.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10644.70 41.58 6013.08 1776.99 10177.92 00:20:00.008 ======================================================== 00:20:00.008 Total : 43072.50 168.25 5946.53 1766.97 44871.20 00:20:00.008 00:20:00.008 00:21:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:00.008 00:21:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:00.008 00:21:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:00.008 00:21:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:00.008 00:21:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:00.008 00:21:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:00.008 00:21:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:00.008 rmmod nvme_tcp 00:20:00.008 rmmod nvme_fabrics 00:20:00.008 rmmod nvme_keyring 00:20:00.008 00:21:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:00.008 00:21:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:00.008 00:21:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:00.008 00:21:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1558073 ']' 00:20:00.008 00:21:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1558073 00:20:00.008 00:21:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@942 -- # '[' -z 1558073 ']' 00:20:00.008 00:21:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # kill -0 1558073 00:20:00.008 00:21:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # uname 00:20:00.008 00:21:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:00.008 00:21:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1558073 00:20:00.008 00:21:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:20:00.008 00:21:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:20:00.008 00:21:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1558073' 00:20:00.009 killing process with pid 1558073 00:20:00.009 00:21:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@961 -- # kill 1558073 00:20:00.009 00:21:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # wait 1558073 00:20:00.009 00:21:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:00.009 00:21:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:00.009 00:21:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:00.009 00:21:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:00.009 00:21:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:00.009 00:21:18 nvmf_tcp.nvmf_perf_adq 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.009 00:21:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.009 00:21:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.913 00:21:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:01.913 00:21:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:01.913 00:21:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:03.289 00:21:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:05.191 00:21:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:10.511 
00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:10.511 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:10.512 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:10.512 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
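The pci_net_devs loop running above and below this point is plain sysfs traversal: each NVMf-capable PCI function is mapped to its kernel netdev by globbing the device's net/ directory. A standalone sketch of the same discovery follows, using the second port's BDF from this trace; the up/up comparison in nvmf/common.sh presumably reads the interface's operstate, but that detail is not visible here, so treat the operstate path as an assumption.

    shopt -s nullglob                                 # empty glob -> empty array
    pci=0000:86:00.1                                  # BDF reported above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # expands to .../net/cvl_0_1
    for net_dev in "${pci_net_devs[@]}"; do
        # keep only interfaces whose link is up (assumed operstate check)
        if [[ $(cat "$net_dev/operstate") == up ]]; then
            echo "Found net devices under $pci: ${net_dev##*/}"
        fi
    done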
00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:10.512 Found net devices under 0000:86:00.0: cvl_0_0 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:10.512 Found net devices under 0000:86:00.1: cvl_0_1 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:10.512 
00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:10.512 00:21:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:10.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:10.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:20:10.512 00:20:10.512 --- 10.0.0.2 ping statistics --- 00:20:10.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.512 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:10.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:10.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:20:10.512 00:20:10.512 --- 10.0.0.1 ping statistics --- 00:20:10.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.512 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:10.512 net.core.busy_poll = 1 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:10.512 net.core.busy_read = 1 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1562349 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1562349 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@823 -- # '[' -z 1562349 ']' 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.512 00:21:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:10.512 [2024-07-16 00:21:29.322412] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:20:10.512 [2024-07-16 00:21:29.322457] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.771 [2024-07-16 00:21:29.380281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:10.771 [2024-07-16 00:21:29.461053] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.771 [2024-07-16 00:21:29.461088] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.771 [2024-07-16 00:21:29.461096] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.771 [2024-07-16 00:21:29.461102] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.771 [2024-07-16 00:21:29.461107] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
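While the second target instance boots, the ADQ plumbing applied by adq_configure_driver a few entries back is worth seeing in one place. Every command below is taken verbatim from this trace; only the ns() wrapper is an added convenience for the "ip netns exec" prefix.

    ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }
    ns ethtool --offload cvl_0_0 hw-tc-offload on                 # let the E810 offload tc rules
    ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1                                # busy-poll sockets instead of sleeping
    sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 -> queues 0-1 (2@0), TC1 -> queues 2-3 (2@2),
    # offloaded to the NIC in channel mode.
    ns tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ns tc qdisc add dev cvl_0_0 ingress
    # Steer NVMe/TCP flows (dst 10.0.0.2:4420) into TC1, hardware-only (skip_sw).
    ns tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # Finally the harness aligns XPS queues: scripts/perf/nvmf/set_xps_rxqs cvl_0_0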
00:20:10.771 [2024-07-16 00:21:29.461145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.771 [2024-07-16 00:21:29.461247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.771 [2024-07-16 00:21:29.461315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:10.771 [2024-07-16 00:21:29.461316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.338 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:11.338 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # return 0 00:20:11.338 00:21:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:11.338 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:11.338 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:11.338 00:21:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.338 00:21:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:20:11.338 00:21:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:11.338 00:21:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:11.338 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:11.338 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:11.339 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:11.597 [2024-07-16 00:21:30.311038] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:11.597 Malloc1 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:11.597 00:21:30 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:11.597 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:11.598 00:21:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:11.598 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:11.598 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:11.598 [2024-07-16 00:21:30.358488] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.598 00:21:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:11.598 00:21:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1562541 00:20:11.598 00:21:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:20:11.598 00:21:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:14.130 00:21:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:20:14.130 00:21:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:14.130 00:21:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.130 00:21:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:14.130 00:21:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:20:14.130 "tick_rate": 2300000000, 00:20:14.130 "poll_groups": [ 00:20:14.130 { 00:20:14.130 "name": "nvmf_tgt_poll_group_000", 00:20:14.130 "admin_qpairs": 1, 00:20:14.130 "io_qpairs": 3, 00:20:14.130 "current_admin_qpairs": 1, 00:20:14.130 "current_io_qpairs": 3, 00:20:14.130 "pending_bdev_io": 0, 00:20:14.130 "completed_nvme_io": 30412, 00:20:14.130 "transports": [ 00:20:14.130 { 00:20:14.130 "trtype": "TCP" 00:20:14.130 } 00:20:14.130 ] 00:20:14.130 }, 00:20:14.130 { 00:20:14.130 "name": "nvmf_tgt_poll_group_001", 00:20:14.130 "admin_qpairs": 0, 00:20:14.130 "io_qpairs": 1, 00:20:14.130 "current_admin_qpairs": 0, 00:20:14.130 "current_io_qpairs": 1, 00:20:14.130 "pending_bdev_io": 0, 00:20:14.130 "completed_nvme_io": 28032, 00:20:14.130 "transports": [ 00:20:14.130 { 00:20:14.130 "trtype": "TCP" 00:20:14.130 } 00:20:14.130 ] 00:20:14.130 }, 00:20:14.130 { 00:20:14.130 "name": "nvmf_tgt_poll_group_002", 00:20:14.130 "admin_qpairs": 0, 00:20:14.130 "io_qpairs": 0, 00:20:14.130 "current_admin_qpairs": 0, 00:20:14.130 "current_io_qpairs": 0, 00:20:14.130 "pending_bdev_io": 0, 00:20:14.130 "completed_nvme_io": 0, 00:20:14.130 "transports": [ 00:20:14.130 { 00:20:14.130 "trtype": 
"TCP" 00:20:14.130 } 00:20:14.130 ] 00:20:14.130 }, 00:20:14.130 { 00:20:14.130 "name": "nvmf_tgt_poll_group_003", 00:20:14.130 "admin_qpairs": 0, 00:20:14.130 "io_qpairs": 0, 00:20:14.130 "current_admin_qpairs": 0, 00:20:14.130 "current_io_qpairs": 0, 00:20:14.130 "pending_bdev_io": 0, 00:20:14.130 "completed_nvme_io": 0, 00:20:14.130 "transports": [ 00:20:14.130 { 00:20:14.131 "trtype": "TCP" 00:20:14.131 } 00:20:14.131 ] 00:20:14.131 } 00:20:14.131 ] 00:20:14.131 }' 00:20:14.131 00:21:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:14.131 00:21:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:20:14.131 00:21:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:20:14.131 00:21:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:20:14.131 00:21:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1562541 00:20:22.249 Initializing NVMe Controllers 00:20:22.249 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:22.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:22.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:22.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:22.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:22.249 Initialization complete. Launching workers. 00:20:22.249 ======================================================== 00:20:22.249 Latency(us) 00:20:22.249 Device Information : IOPS MiB/s Average min max 00:20:22.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5618.50 21.95 11430.10 1730.88 58107.49 00:20:22.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14826.30 57.92 4316.64 1437.83 46417.43 00:20:22.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5330.90 20.82 12009.99 1843.71 57111.84 00:20:22.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5130.00 20.04 12481.18 1776.60 58510.52 00:20:22.249 ======================================================== 00:20:22.249 Total : 30905.70 120.73 8292.07 1437.83 58510.52 00:20:22.249 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:22.249 rmmod nvme_tcp 00:20:22.249 rmmod nvme_fabrics 00:20:22.249 rmmod nvme_keyring 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1562349 ']' 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1562349 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@942 -- # '[' -z 1562349 ']' 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # kill -0 1562349 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # uname 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1562349 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1562349' 00:20:22.249 killing process with pid 1562349 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@961 -- # kill 1562349 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # wait 1562349 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.249 00:21:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.149 00:21:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:24.149 00:21:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:24.149 00:20:24.149 real 0m49.334s 00:20:24.149 user 2m49.565s 00:20:24.149 sys 0m9.191s 00:20:24.149 00:21:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1118 -- # xtrace_disable 00:20:24.149 00:21:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:24.149 ************************************ 00:20:24.149 END TEST nvmf_perf_adq 00:20:24.149 ************************************ 00:20:24.149 00:21:42 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:20:24.149 00:21:42 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:24.149 00:21:42 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:20:24.149 00:21:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:20:24.149 00:21:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:24.408 ************************************ 00:20:24.408 START TEST nvmf_shutdown 00:20:24.408 ************************************ 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:24.408 * Looking for test storage... 
00:20:24.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.408 00:21:43 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # xtrace_disable 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:24.409 ************************************ 00:20:24.409 START TEST nvmf_shutdown_tc1 00:20:24.409 ************************************ 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1117 -- # nvmf_shutdown_tc1 00:20:24.409 00:21:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:24.409 00:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:29.682 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:29.682 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:29.682 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:29.683 00:21:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:29.683 Found net devices under 0000:86:00.0: cvl_0_0 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:29.683 Found net devices under 0000:86:00.1: cvl_0_1 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:29.683 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:29.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:29.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:20:29.942 00:20:29.942 --- 10.0.0.2 ping statistics --- 00:20:29.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.942 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:29.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:29.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:20:29.942 00:20:29.942 --- 10.0.0.1 ping statistics --- 00:20:29.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.942 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1567742 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1567742 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@823 -- # '[' -z 1567742 ']' 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:29.942 00:21:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.942 [2024-07-16 00:21:48.689608] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:20:29.942 [2024-07-16 00:21:48.689652] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.942 [2024-07-16 00:21:48.748834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:30.201 [2024-07-16 00:21:48.821865] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.201 [2024-07-16 00:21:48.821905] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.201 [2024-07-16 00:21:48.821911] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.201 [2024-07-16 00:21:48.821917] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.201 [2024-07-16 00:21:48.821922] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:30.201 [2024-07-16 00:21:48.822027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.201 [2024-07-16 00:21:48.822131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:30.201 [2024-07-16 00:21:48.822263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.201 [2024-07-16 00:21:48.822265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # return 0 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.770 [2024-07-16 00:21:49.533106] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:30.770 00:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.770 Malloc1 00:20:31.029 [2024-07-16 00:21:49.628843] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.029 Malloc2 00:20:31.029 Malloc3 00:20:31.029 Malloc4 00:20:31.029 Malloc5 00:20:31.029 Malloc6 00:20:31.029 Malloc7 00:20:31.289 Malloc8 00:20:31.289 Malloc9 00:20:31.289 Malloc10 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1568024 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1568024 /var/tmp/bdevperf.sock 00:20:31.289 00:21:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@823 -- # '[' -z 1568024 ']' 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:31.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.289 { 00:20:31.289 "params": { 00:20:31.289 "name": "Nvme$subsystem", 00:20:31.289 "trtype": "$TEST_TRANSPORT", 00:20:31.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.289 "adrfam": "ipv4", 00:20:31.289 "trsvcid": "$NVMF_PORT", 00:20:31.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.289 "hdgst": ${hdgst:-false}, 00:20:31.289 "ddgst": ${ddgst:-false} 00:20:31.289 }, 00:20:31.289 "method": "bdev_nvme_attach_controller" 00:20:31.289 } 00:20:31.289 EOF 00:20:31.289 )") 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.289 { 00:20:31.289 "params": { 00:20:31.289 "name": "Nvme$subsystem", 00:20:31.289 "trtype": "$TEST_TRANSPORT", 00:20:31.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.289 "adrfam": "ipv4", 00:20:31.289 "trsvcid": "$NVMF_PORT", 00:20:31.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.289 "hdgst": ${hdgst:-false}, 00:20:31.289 "ddgst": ${ddgst:-false} 00:20:31.289 }, 00:20:31.289 "method": "bdev_nvme_attach_controller" 00:20:31.289 } 00:20:31.289 EOF 00:20:31.289 )") 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.289 { 00:20:31.289 "params": { 00:20:31.289 "name": "Nvme$subsystem", 00:20:31.289 "trtype": 
"$TEST_TRANSPORT", 00:20:31.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.289 "adrfam": "ipv4", 00:20:31.289 "trsvcid": "$NVMF_PORT", 00:20:31.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.289 "hdgst": ${hdgst:-false}, 00:20:31.289 "ddgst": ${ddgst:-false} 00:20:31.289 }, 00:20:31.289 "method": "bdev_nvme_attach_controller" 00:20:31.289 } 00:20:31.289 EOF 00:20:31.289 )") 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.289 { 00:20:31.289 "params": { 00:20:31.289 "name": "Nvme$subsystem", 00:20:31.289 "trtype": "$TEST_TRANSPORT", 00:20:31.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.289 "adrfam": "ipv4", 00:20:31.289 "trsvcid": "$NVMF_PORT", 00:20:31.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.289 "hdgst": ${hdgst:-false}, 00:20:31.289 "ddgst": ${ddgst:-false} 00:20:31.289 }, 00:20:31.289 "method": "bdev_nvme_attach_controller" 00:20:31.289 } 00:20:31.289 EOF 00:20:31.289 )") 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.289 { 00:20:31.289 "params": { 00:20:31.289 "name": "Nvme$subsystem", 00:20:31.289 "trtype": "$TEST_TRANSPORT", 00:20:31.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.289 "adrfam": "ipv4", 00:20:31.289 "trsvcid": "$NVMF_PORT", 00:20:31.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.289 "hdgst": ${hdgst:-false}, 00:20:31.289 "ddgst": ${ddgst:-false} 00:20:31.289 }, 00:20:31.289 "method": "bdev_nvme_attach_controller" 00:20:31.289 } 00:20:31.289 EOF 00:20:31.289 )") 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.289 { 00:20:31.289 "params": { 00:20:31.289 "name": "Nvme$subsystem", 00:20:31.289 "trtype": "$TEST_TRANSPORT", 00:20:31.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.289 "adrfam": "ipv4", 00:20:31.289 "trsvcid": "$NVMF_PORT", 00:20:31.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.289 "hdgst": ${hdgst:-false}, 00:20:31.289 "ddgst": ${ddgst:-false} 00:20:31.289 }, 00:20:31.289 "method": "bdev_nvme_attach_controller" 00:20:31.289 } 00:20:31.289 EOF 00:20:31.289 )") 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.289 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.289 { 00:20:31.289 "params": { 00:20:31.289 "name": "Nvme$subsystem", 00:20:31.289 "trtype": "$TEST_TRANSPORT", 
00:20:31.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.290 "adrfam": "ipv4", 00:20:31.290 "trsvcid": "$NVMF_PORT", 00:20:31.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.290 "hdgst": ${hdgst:-false}, 00:20:31.290 "ddgst": ${ddgst:-false} 00:20:31.290 }, 00:20:31.290 "method": "bdev_nvme_attach_controller" 00:20:31.290 } 00:20:31.290 EOF 00:20:31.290 )") 00:20:31.290 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:31.290 [2024-07-16 00:21:50.095939] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:20:31.290 [2024-07-16 00:21:50.095990] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:31.290 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.290 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.290 { 00:20:31.290 "params": { 00:20:31.290 "name": "Nvme$subsystem", 00:20:31.290 "trtype": "$TEST_TRANSPORT", 00:20:31.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.290 "adrfam": "ipv4", 00:20:31.290 "trsvcid": "$NVMF_PORT", 00:20:31.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.290 "hdgst": ${hdgst:-false}, 00:20:31.290 "ddgst": ${ddgst:-false} 00:20:31.290 }, 00:20:31.290 "method": "bdev_nvme_attach_controller" 00:20:31.290 } 00:20:31.290 EOF 00:20:31.290 )") 00:20:31.290 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:31.290 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.290 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.290 { 00:20:31.290 "params": { 00:20:31.290 "name": "Nvme$subsystem", 00:20:31.290 "trtype": "$TEST_TRANSPORT", 00:20:31.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.290 "adrfam": "ipv4", 00:20:31.290 "trsvcid": "$NVMF_PORT", 00:20:31.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.290 "hdgst": ${hdgst:-false}, 00:20:31.290 "ddgst": ${ddgst:-false} 00:20:31.290 }, 00:20:31.290 "method": "bdev_nvme_attach_controller" 00:20:31.290 } 00:20:31.290 EOF 00:20:31.290 )") 00:20:31.290 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:31.290 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.290 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.290 { 00:20:31.290 "params": { 00:20:31.290 "name": "Nvme$subsystem", 00:20:31.290 "trtype": "$TEST_TRANSPORT", 00:20:31.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.290 "adrfam": "ipv4", 00:20:31.290 "trsvcid": "$NVMF_PORT", 00:20:31.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.290 "hdgst": ${hdgst:-false}, 00:20:31.290 "ddgst": ${ddgst:-false} 00:20:31.290 }, 00:20:31.290 "method": "bdev_nvme_attach_controller" 00:20:31.290 } 00:20:31.290 EOF 00:20:31.290 )") 00:20:31.290 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:20:31.290 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:20:31.290 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:31.290 00:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:31.290 "params": { 00:20:31.290 "name": "Nvme1", 00:20:31.290 "trtype": "tcp", 00:20:31.290 "traddr": "10.0.0.2", 00:20:31.290 "adrfam": "ipv4", 00:20:31.290 "trsvcid": "4420", 00:20:31.290 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.290 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:31.290 "hdgst": false, 00:20:31.290 "ddgst": false 00:20:31.290 }, 00:20:31.290 "method": "bdev_nvme_attach_controller" 00:20:31.290 },{ 00:20:31.290 "params": { 00:20:31.290 "name": "Nvme2", 00:20:31.290 "trtype": "tcp", 00:20:31.290 "traddr": "10.0.0.2", 00:20:31.290 "adrfam": "ipv4", 00:20:31.290 "trsvcid": "4420", 00:20:31.290 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:31.290 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:31.290 "hdgst": false, 00:20:31.290 "ddgst": false 00:20:31.290 }, 00:20:31.290 "method": "bdev_nvme_attach_controller" 00:20:31.290 },{ 00:20:31.290 "params": { 00:20:31.290 "name": "Nvme3", 00:20:31.290 "trtype": "tcp", 00:20:31.290 "traddr": "10.0.0.2", 00:20:31.290 "adrfam": "ipv4", 00:20:31.290 "trsvcid": "4420", 00:20:31.290 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:31.290 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:31.290 "hdgst": false, 00:20:31.290 "ddgst": false 00:20:31.290 }, 00:20:31.290 "method": "bdev_nvme_attach_controller" 00:20:31.290 },{ 00:20:31.290 "params": { 00:20:31.290 "name": "Nvme4", 00:20:31.290 "trtype": "tcp", 00:20:31.290 "traddr": "10.0.0.2", 00:20:31.290 "adrfam": "ipv4", 00:20:31.290 "trsvcid": "4420", 00:20:31.290 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:31.290 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:31.290 "hdgst": false, 00:20:31.290 "ddgst": false 00:20:31.290 }, 00:20:31.290 "method": "bdev_nvme_attach_controller" 00:20:31.290 },{ 00:20:31.290 "params": { 00:20:31.290 "name": "Nvme5", 00:20:31.290 "trtype": "tcp", 00:20:31.290 "traddr": "10.0.0.2", 00:20:31.290 "adrfam": "ipv4", 00:20:31.290 "trsvcid": "4420", 00:20:31.290 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:31.290 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:31.290 "hdgst": false, 00:20:31.290 "ddgst": false 00:20:31.290 }, 00:20:31.290 "method": "bdev_nvme_attach_controller" 00:20:31.290 },{ 00:20:31.290 "params": { 00:20:31.290 "name": "Nvme6", 00:20:31.290 "trtype": "tcp", 00:20:31.290 "traddr": "10.0.0.2", 00:20:31.290 "adrfam": "ipv4", 00:20:31.290 "trsvcid": "4420", 00:20:31.290 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:31.290 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:31.290 "hdgst": false, 00:20:31.290 "ddgst": false 00:20:31.290 }, 00:20:31.290 "method": "bdev_nvme_attach_controller" 00:20:31.290 },{ 00:20:31.290 "params": { 00:20:31.290 "name": "Nvme7", 00:20:31.290 "trtype": "tcp", 00:20:31.290 "traddr": "10.0.0.2", 00:20:31.290 "adrfam": "ipv4", 00:20:31.290 "trsvcid": "4420", 00:20:31.290 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:31.290 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:31.290 "hdgst": false, 00:20:31.290 "ddgst": false 00:20:31.290 }, 00:20:31.290 "method": "bdev_nvme_attach_controller" 00:20:31.290 },{ 00:20:31.290 "params": { 00:20:31.290 "name": "Nvme8", 00:20:31.290 "trtype": "tcp", 00:20:31.290 "traddr": "10.0.0.2", 00:20:31.290 "adrfam": "ipv4", 00:20:31.290 "trsvcid": "4420", 00:20:31.290 
"subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:31.290 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:31.290 "hdgst": false, 00:20:31.290 "ddgst": false 00:20:31.290 }, 00:20:31.290 "method": "bdev_nvme_attach_controller" 00:20:31.290 },{ 00:20:31.290 "params": { 00:20:31.290 "name": "Nvme9", 00:20:31.290 "trtype": "tcp", 00:20:31.290 "traddr": "10.0.0.2", 00:20:31.290 "adrfam": "ipv4", 00:20:31.290 "trsvcid": "4420", 00:20:31.290 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:31.290 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:31.290 "hdgst": false, 00:20:31.290 "ddgst": false 00:20:31.290 }, 00:20:31.290 "method": "bdev_nvme_attach_controller" 00:20:31.290 },{ 00:20:31.290 "params": { 00:20:31.290 "name": "Nvme10", 00:20:31.290 "trtype": "tcp", 00:20:31.290 "traddr": "10.0.0.2", 00:20:31.290 "adrfam": "ipv4", 00:20:31.290 "trsvcid": "4420", 00:20:31.290 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:31.290 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:31.290 "hdgst": false, 00:20:31.290 "ddgst": false 00:20:31.290 }, 00:20:31.290 "method": "bdev_nvme_attach_controller" 00:20:31.290 }' 00:20:31.549 [2024-07-16 00:21:50.151237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.549 [2024-07-16 00:21:50.226204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.926 00:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:32.926 00:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # return 0 00:20:32.926 00:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:32.926 00:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:32.926 00:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:32.926 00:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:32.926 00:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1568024 00:20:32.926 00:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:32.926 00:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:20:33.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1568024 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1567742 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.867 { 00:20:33.867 "params": { 00:20:33.867 
"name": "Nvme$subsystem", 00:20:33.867 "trtype": "$TEST_TRANSPORT", 00:20:33.867 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.867 "adrfam": "ipv4", 00:20:33.867 "trsvcid": "$NVMF_PORT", 00:20:33.867 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.867 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.867 "hdgst": ${hdgst:-false}, 00:20:33.867 "ddgst": ${ddgst:-false} 00:20:33.867 }, 00:20:33.867 "method": "bdev_nvme_attach_controller" 00:20:33.867 } 00:20:33.867 EOF 00:20:33.867 )") 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.867 { 00:20:33.867 "params": { 00:20:33.867 "name": "Nvme$subsystem", 00:20:33.867 "trtype": "$TEST_TRANSPORT", 00:20:33.867 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.867 "adrfam": "ipv4", 00:20:33.867 "trsvcid": "$NVMF_PORT", 00:20:33.867 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.867 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.867 "hdgst": ${hdgst:-false}, 00:20:33.867 "ddgst": ${ddgst:-false} 00:20:33.867 }, 00:20:33.867 "method": "bdev_nvme_attach_controller" 00:20:33.867 } 00:20:33.867 EOF 00:20:33.867 )") 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.867 { 00:20:33.867 "params": { 00:20:33.867 "name": "Nvme$subsystem", 00:20:33.867 "trtype": "$TEST_TRANSPORT", 00:20:33.867 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.867 "adrfam": "ipv4", 00:20:33.867 "trsvcid": "$NVMF_PORT", 00:20:33.867 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.867 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.867 "hdgst": ${hdgst:-false}, 00:20:33.867 "ddgst": ${ddgst:-false} 00:20:33.867 }, 00:20:33.867 "method": "bdev_nvme_attach_controller" 00:20:33.867 } 00:20:33.867 EOF 00:20:33.867 )") 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.867 { 00:20:33.867 "params": { 00:20:33.867 "name": "Nvme$subsystem", 00:20:33.867 "trtype": "$TEST_TRANSPORT", 00:20:33.867 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.867 "adrfam": "ipv4", 00:20:33.867 "trsvcid": "$NVMF_PORT", 00:20:33.867 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.867 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.867 "hdgst": ${hdgst:-false}, 00:20:33.867 "ddgst": ${ddgst:-false} 00:20:33.867 }, 00:20:33.867 "method": "bdev_nvme_attach_controller" 00:20:33.867 } 00:20:33.867 EOF 00:20:33.867 )") 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.867 { 00:20:33.867 "params": { 00:20:33.867 "name": "Nvme$subsystem", 
00:20:33.867 "trtype": "$TEST_TRANSPORT", 00:20:33.867 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.867 "adrfam": "ipv4", 00:20:33.867 "trsvcid": "$NVMF_PORT", 00:20:33.867 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.867 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.867 "hdgst": ${hdgst:-false}, 00:20:33.867 "ddgst": ${ddgst:-false} 00:20:33.867 }, 00:20:33.867 "method": "bdev_nvme_attach_controller" 00:20:33.867 } 00:20:33.867 EOF 00:20:33.867 )") 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.867 { 00:20:33.867 "params": { 00:20:33.867 "name": "Nvme$subsystem", 00:20:33.867 "trtype": "$TEST_TRANSPORT", 00:20:33.867 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.867 "adrfam": "ipv4", 00:20:33.867 "trsvcid": "$NVMF_PORT", 00:20:33.867 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.867 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.867 "hdgst": ${hdgst:-false}, 00:20:33.867 "ddgst": ${ddgst:-false} 00:20:33.867 }, 00:20:33.867 "method": "bdev_nvme_attach_controller" 00:20:33.867 } 00:20:33.867 EOF 00:20:33.867 )") 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.867 { 00:20:33.867 "params": { 00:20:33.867 "name": "Nvme$subsystem", 00:20:33.867 "trtype": "$TEST_TRANSPORT", 00:20:33.867 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.867 "adrfam": "ipv4", 00:20:33.867 "trsvcid": "$NVMF_PORT", 00:20:33.867 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.867 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.867 "hdgst": ${hdgst:-false}, 00:20:33.867 "ddgst": ${ddgst:-false} 00:20:33.867 }, 00:20:33.867 "method": "bdev_nvme_attach_controller" 00:20:33.867 } 00:20:33.867 EOF 00:20:33.867 )") 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:33.867 [2024-07-16 00:21:52.593242] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:20:33.867 [2024-07-16 00:21:52.593289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1568504 ] 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.867 { 00:20:33.867 "params": { 00:20:33.867 "name": "Nvme$subsystem", 00:20:33.867 "trtype": "$TEST_TRANSPORT", 00:20:33.867 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.867 "adrfam": "ipv4", 00:20:33.867 "trsvcid": "$NVMF_PORT", 00:20:33.867 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.867 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.867 "hdgst": ${hdgst:-false}, 00:20:33.867 "ddgst": ${ddgst:-false} 00:20:33.867 }, 00:20:33.867 "method": "bdev_nvme_attach_controller" 00:20:33.867 } 00:20:33.867 EOF 00:20:33.867 )") 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.867 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.867 { 00:20:33.867 "params": { 00:20:33.867 "name": "Nvme$subsystem", 00:20:33.867 "trtype": "$TEST_TRANSPORT", 00:20:33.867 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.867 "adrfam": "ipv4", 00:20:33.867 "trsvcid": "$NVMF_PORT", 00:20:33.867 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.867 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.868 "hdgst": ${hdgst:-false}, 00:20:33.868 "ddgst": ${ddgst:-false} 00:20:33.868 }, 00:20:33.868 "method": "bdev_nvme_attach_controller" 00:20:33.868 } 00:20:33.868 EOF 00:20:33.868 )") 00:20:33.868 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:33.868 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.868 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.868 { 00:20:33.868 "params": { 00:20:33.868 "name": "Nvme$subsystem", 00:20:33.868 "trtype": "$TEST_TRANSPORT", 00:20:33.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.868 "adrfam": "ipv4", 00:20:33.868 "trsvcid": "$NVMF_PORT", 00:20:33.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.868 "hdgst": ${hdgst:-false}, 00:20:33.868 "ddgst": ${ddgst:-false} 00:20:33.868 }, 00:20:33.868 "method": "bdev_nvme_attach_controller" 00:20:33.868 } 00:20:33.868 EOF 00:20:33.868 )") 00:20:33.868 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:33.868 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
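The gen_nvmf_target_json walk above shows the config-assembly idiom: one JSON fragment per subsystem is captured from a heredoc into a bash array via config+=("$(cat <<-EOF ...)"), then IFS=, comma-joins the fragments and jq validates and pretty-prints the result before it is fed to the app over --json /dev/fd/63 (process substitution). A condensed sketch with tcp/10.0.0.2/4420 filled in as literals; only the per-fragment heredoc and the IFS join appear verbatim in this trace, and the outer "subsystems" wrapper is a plausible reconstruction:

gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do             # default to subsystem 1 if no args given
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,                                # makes ${config[*]} expand comma-joined
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

Invoked as in the run above, e.g. bdevperf --json <(gen_target_json 1 2 3 4 5 6 7 8 9 10), each fragment becomes one bdev_nvme_attach_controller call against the target.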
00:20:33.868 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:33.868 00:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:33.868 "params": { 00:20:33.868 "name": "Nvme1", 00:20:33.868 "trtype": "tcp", 00:20:33.868 "traddr": "10.0.0.2", 00:20:33.868 "adrfam": "ipv4", 00:20:33.868 "trsvcid": "4420", 00:20:33.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:33.868 "hdgst": false, 00:20:33.868 "ddgst": false 00:20:33.868 }, 00:20:33.868 "method": "bdev_nvme_attach_controller" 00:20:33.868 },{ 00:20:33.868 "params": { 00:20:33.868 "name": "Nvme2", 00:20:33.868 "trtype": "tcp", 00:20:33.868 "traddr": "10.0.0.2", 00:20:33.868 "adrfam": "ipv4", 00:20:33.868 "trsvcid": "4420", 00:20:33.868 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:33.868 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:33.868 "hdgst": false, 00:20:33.868 "ddgst": false 00:20:33.868 }, 00:20:33.868 "method": "bdev_nvme_attach_controller" 00:20:33.868 },{ 00:20:33.868 "params": { 00:20:33.868 "name": "Nvme3", 00:20:33.868 "trtype": "tcp", 00:20:33.868 "traddr": "10.0.0.2", 00:20:33.868 "adrfam": "ipv4", 00:20:33.868 "trsvcid": "4420", 00:20:33.868 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:33.868 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:33.868 "hdgst": false, 00:20:33.868 "ddgst": false 00:20:33.868 }, 00:20:33.868 "method": "bdev_nvme_attach_controller" 00:20:33.868 },{ 00:20:33.868 "params": { 00:20:33.868 "name": "Nvme4", 00:20:33.868 "trtype": "tcp", 00:20:33.868 "traddr": "10.0.0.2", 00:20:33.868 "adrfam": "ipv4", 00:20:33.868 "trsvcid": "4420", 00:20:33.868 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:33.868 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:33.868 "hdgst": false, 00:20:33.868 "ddgst": false 00:20:33.868 }, 00:20:33.868 "method": "bdev_nvme_attach_controller" 00:20:33.868 },{ 00:20:33.868 "params": { 00:20:33.868 "name": "Nvme5", 00:20:33.868 "trtype": "tcp", 00:20:33.868 "traddr": "10.0.0.2", 00:20:33.868 "adrfam": "ipv4", 00:20:33.868 "trsvcid": "4420", 00:20:33.868 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:33.868 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:33.868 "hdgst": false, 00:20:33.868 "ddgst": false 00:20:33.868 }, 00:20:33.868 "method": "bdev_nvme_attach_controller" 00:20:33.868 },{ 00:20:33.868 "params": { 00:20:33.868 "name": "Nvme6", 00:20:33.868 "trtype": "tcp", 00:20:33.868 "traddr": "10.0.0.2", 00:20:33.868 "adrfam": "ipv4", 00:20:33.868 "trsvcid": "4420", 00:20:33.868 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:33.868 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:33.868 "hdgst": false, 00:20:33.868 "ddgst": false 00:20:33.868 }, 00:20:33.868 "method": "bdev_nvme_attach_controller" 00:20:33.868 },{ 00:20:33.868 "params": { 00:20:33.868 "name": "Nvme7", 00:20:33.868 "trtype": "tcp", 00:20:33.868 "traddr": "10.0.0.2", 00:20:33.868 "adrfam": "ipv4", 00:20:33.868 "trsvcid": "4420", 00:20:33.868 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:33.868 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:33.868 "hdgst": false, 00:20:33.868 "ddgst": false 00:20:33.868 }, 00:20:33.868 "method": "bdev_nvme_attach_controller" 00:20:33.868 },{ 00:20:33.868 "params": { 00:20:33.868 "name": "Nvme8", 00:20:33.868 "trtype": "tcp", 00:20:33.868 "traddr": "10.0.0.2", 00:20:33.868 "adrfam": "ipv4", 00:20:33.868 "trsvcid": "4420", 00:20:33.868 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:33.868 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:33.868 "hdgst": false, 
00:20:33.868 "ddgst": false 00:20:33.868 }, 00:20:33.868 "method": "bdev_nvme_attach_controller" 00:20:33.868 },{ 00:20:33.868 "params": { 00:20:33.868 "name": "Nvme9", 00:20:33.868 "trtype": "tcp", 00:20:33.868 "traddr": "10.0.0.2", 00:20:33.868 "adrfam": "ipv4", 00:20:33.868 "trsvcid": "4420", 00:20:33.868 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:33.868 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:33.868 "hdgst": false, 00:20:33.868 "ddgst": false 00:20:33.868 }, 00:20:33.868 "method": "bdev_nvme_attach_controller" 00:20:33.868 },{ 00:20:33.868 "params": { 00:20:33.868 "name": "Nvme10", 00:20:33.868 "trtype": "tcp", 00:20:33.868 "traddr": "10.0.0.2", 00:20:33.868 "adrfam": "ipv4", 00:20:33.868 "trsvcid": "4420", 00:20:33.868 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:33.868 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:33.868 "hdgst": false, 00:20:33.868 "ddgst": false 00:20:33.868 }, 00:20:33.868 "method": "bdev_nvme_attach_controller" 00:20:33.868 }' 00:20:33.868 [2024-07-16 00:21:52.650064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.127 [2024-07-16 00:21:52.724440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.500 Running I/O for 1 seconds... 00:20:36.459 00:20:36.459 Latency(us) 00:20:36.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.459 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.459 Verification LBA range: start 0x0 length 0x400 00:20:36.459 Nvme1n1 : 1.13 286.90 17.93 0.00 0.00 221094.30 17438.27 210627.01 00:20:36.459 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.459 Verification LBA range: start 0x0 length 0x400 00:20:36.459 Nvme2n1 : 1.03 248.96 15.56 0.00 0.00 250347.30 19945.74 217009.64 00:20:36.459 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.459 Verification LBA range: start 0x0 length 0x400 00:20:36.459 Nvme3n1 : 1.14 280.95 17.56 0.00 0.00 219133.51 18919.96 231598.53 00:20:36.459 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.459 Verification LBA range: start 0x0 length 0x400 00:20:36.459 Nvme4n1 : 1.15 278.91 17.43 0.00 0.00 217818.16 15500.69 217009.64 00:20:36.459 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.459 Verification LBA range: start 0x0 length 0x400 00:20:36.459 Nvme5n1 : 1.14 280.26 17.52 0.00 0.00 213503.11 18350.08 212450.62 00:20:36.459 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.459 Verification LBA range: start 0x0 length 0x400 00:20:36.459 Nvme6n1 : 1.15 277.18 17.32 0.00 0.00 212732.53 18122.13 219745.06 00:20:36.459 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.459 Verification LBA range: start 0x0 length 0x400 00:20:36.459 Nvme7n1 : 1.12 285.28 17.83 0.00 0.00 202804.00 16184.54 215186.03 00:20:36.459 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.459 Verification LBA range: start 0x0 length 0x400 00:20:36.459 Nvme8n1 : 1.14 280.48 17.53 0.00 0.00 203565.15 19831.76 214274.23 00:20:36.459 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.459 Verification LBA range: start 0x0 length 0x400 00:20:36.459 Nvme9n1 : 1.15 277.90 17.37 0.00 0.00 202862.06 15956.59 221568.67 00:20:36.459 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.459 Verification LBA range: start 0x0 length 0x400 
00:20:36.459 Nvme10n1 : 1.16 275.85 17.24 0.00 0.00 201505.39 17324.30 242540.19 00:20:36.459 =================================================================================================================== 00:20:36.459 Total : 2772.67 173.29 0.00 0.00 213815.00 15500.69 242540.19 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:36.719 rmmod nvme_tcp 00:20:36.719 rmmod nvme_fabrics 00:20:36.719 rmmod nvme_keyring 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1567742 ']' 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1567742 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@942 -- # '[' -z 1567742 ']' 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # kill -0 1567742 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@947 -- # uname 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1567742 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1567742' 00:20:36.719 killing process with pid 1567742 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@961 -- # kill 1567742 00:20:36.719 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # wait 1567742 00:20:37.288 00:21:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:37.288 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:37.288 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:37.288 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:37.288 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:37.288 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.288 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:37.288 00:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.195 00:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:39.196 00:20:39.196 real 0m14.766s 00:20:39.196 user 0m33.364s 00:20:39.196 sys 0m5.440s 00:20:39.196 00:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:20:39.196 00:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:39.196 ************************************ 00:20:39.196 END TEST nvmf_shutdown_tc1 00:20:39.196 ************************************ 00:20:39.196 00:21:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1136 -- # return 0 00:20:39.196 00:21:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:39.196 00:21:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:20:39.196 00:21:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # xtrace_disable 00:20:39.196 00:21:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:39.196 ************************************ 00:20:39.196 START TEST nvmf_shutdown_tc2 00:20:39.196 ************************************ 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1117 -- # nvmf_shutdown_tc2 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:39.196 00:21:58 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:39.196 00:21:58 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:39.196 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:39.196 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:20:39.196 Found net devices under 0000:86:00.0: cvl_0_0 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.196 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:39.197 Found net devices under 0000:86:00.1: cvl_0_1 00:20:39.197 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.197 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:39.197 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:39.197 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:39.197 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:39.197 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:39.197 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.197 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:39.197 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:39.197 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:39.197 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:39.197 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:39.197 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:39.197 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:39.197 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.197 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:39.197 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:39.197 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:39.197 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:39.456 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:39.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:39.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms
00:20:39.456
00:20:39.456 --- 10.0.0.2 ping statistics ---
00:20:39.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:39.456 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:39.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:39.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms
00:20:39.456
00:20:39.456 --- 10.0.0.1 ping statistics ---
00:20:39.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:39.456 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:20:39.716 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@716 -- # xtrace_disable
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:39.716 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1569533
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1569533
00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m
0x1E 00:20:39.716 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@823 -- # '[' -z 1569533 ']' 00:20:39.716 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.716 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:39.716 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.716 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:39.716 00:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:39.716 [2024-07-16 00:21:58.370963] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:20:39.716 [2024-07-16 00:21:58.371008] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.716 [2024-07-16 00:21:58.430424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:39.716 [2024-07-16 00:21:58.505672] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.716 [2024-07-16 00:21:58.505712] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.716 [2024-07-16 00:21:58.505718] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.716 [2024-07-16 00:21:58.505724] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.716 [2024-07-16 00:21:58.505729] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:39.716 [2024-07-16 00:21:58.505833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.716 [2024-07-16 00:21:58.505900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:39.716 [2024-07-16 00:21:58.505988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.716 [2024-07-16 00:21:58.505989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # return 0 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.651 [2024-07-16 00:21:59.223137] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:40.651 00:21:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:40.651 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.651 Malloc1 00:20:40.651 [2024-07-16 00:21:59.318819] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.651 Malloc2 00:20:40.651 Malloc3 00:20:40.651 Malloc4 00:20:40.651 Malloc5 00:20:40.909 Malloc6 00:20:40.909 Malloc7 00:20:40.909 Malloc8 00:20:40.909 Malloc9 00:20:40.909 Malloc10 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1569812 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1569812 /var/tmp/bdevperf.sock 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@823 -- # '[' -z 1569812 ']' 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
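The bdevperf process being waited on here was launched at shutdown.sh@102 with the generated target JSON passed through process substitution, which is why the argument list records --json /dev/fd/63. A sketch reconstructed from the traced lines (gen_nvmf_target_json is the generator shown earlier in this log), followed by a condensed version of the waitforio gate (shutdown.sh@57-@69) that holds the test until I/O is actually flowing; the read_io_count=3, 67, 131 progression further below is that loop crossing its 100-read threshold:

# Shape of the traced bdevperf invocation (sketch, not a verbatim replay).
# -q 64: queue depth, -o 65536: 64 KiB I/O size, -w verify: verify workload, -t 10: seconds.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
  --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
  -q 64 -o 65536 -w verify -t 10 &

# waitforio, condensed: poll Nvme1n1's read counter over the bdevperf RPC
# socket until at least 100 reads have completed, retrying with 0.25 s pauses.
for i in {1..10}; do
  n=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
  [ "$n" -ge 100 ] && break
  sleep 0.25
done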
00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:40.909 { 00:20:40.909 "params": { 00:20:40.909 "name": "Nvme$subsystem", 00:20:40.909 "trtype": "$TEST_TRANSPORT", 00:20:40.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.909 "adrfam": "ipv4", 00:20:40.909 "trsvcid": "$NVMF_PORT", 00:20:40.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.909 "hdgst": ${hdgst:-false}, 00:20:40.909 "ddgst": ${ddgst:-false} 00:20:40.909 }, 00:20:40.909 "method": "bdev_nvme_attach_controller" 00:20:40.909 } 00:20:40.909 EOF 00:20:40.909 )") 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:40.909 { 00:20:40.909 "params": { 00:20:40.909 "name": "Nvme$subsystem", 00:20:40.909 "trtype": "$TEST_TRANSPORT", 00:20:40.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.909 "adrfam": "ipv4", 00:20:40.909 "trsvcid": "$NVMF_PORT", 00:20:40.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.909 "hdgst": ${hdgst:-false}, 00:20:40.909 "ddgst": ${ddgst:-false} 00:20:40.909 }, 00:20:40.909 "method": "bdev_nvme_attach_controller" 00:20:40.909 } 00:20:40.909 EOF 00:20:40.909 )") 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:40.909 { 00:20:40.909 "params": { 00:20:40.909 "name": "Nvme$subsystem", 00:20:40.909 "trtype": "$TEST_TRANSPORT", 00:20:40.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.909 "adrfam": "ipv4", 00:20:40.909 "trsvcid": "$NVMF_PORT", 00:20:40.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.909 "hdgst": ${hdgst:-false}, 00:20:40.909 "ddgst": ${ddgst:-false} 00:20:40.909 }, 00:20:40.909 "method": "bdev_nvme_attach_controller" 00:20:40.909 } 00:20:40.909 EOF 00:20:40.909 )") 00:20:40.909 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:41.167 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.167 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.167 { 00:20:41.167 "params": { 00:20:41.167 "name": "Nvme$subsystem", 00:20:41.167 "trtype": "$TEST_TRANSPORT", 00:20:41.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.167 "adrfam": "ipv4", 00:20:41.167 "trsvcid": "$NVMF_PORT", 
00:20:41.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.167 "hdgst": ${hdgst:-false}, 00:20:41.167 "ddgst": ${ddgst:-false} 00:20:41.167 }, 00:20:41.167 "method": "bdev_nvme_attach_controller" 00:20:41.167 } 00:20:41.168 EOF 00:20:41.168 )") 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.168 { 00:20:41.168 "params": { 00:20:41.168 "name": "Nvme$subsystem", 00:20:41.168 "trtype": "$TEST_TRANSPORT", 00:20:41.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.168 "adrfam": "ipv4", 00:20:41.168 "trsvcid": "$NVMF_PORT", 00:20:41.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.168 "hdgst": ${hdgst:-false}, 00:20:41.168 "ddgst": ${ddgst:-false} 00:20:41.168 }, 00:20:41.168 "method": "bdev_nvme_attach_controller" 00:20:41.168 } 00:20:41.168 EOF 00:20:41.168 )") 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.168 { 00:20:41.168 "params": { 00:20:41.168 "name": "Nvme$subsystem", 00:20:41.168 "trtype": "$TEST_TRANSPORT", 00:20:41.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.168 "adrfam": "ipv4", 00:20:41.168 "trsvcid": "$NVMF_PORT", 00:20:41.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.168 "hdgst": ${hdgst:-false}, 00:20:41.168 "ddgst": ${ddgst:-false} 00:20:41.168 }, 00:20:41.168 "method": "bdev_nvme_attach_controller" 00:20:41.168 } 00:20:41.168 EOF 00:20:41.168 )") 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:41.168 [2024-07-16 00:21:59.782538] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:20:41.168 [2024-07-16 00:21:59.782586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1569812 ] 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.168 { 00:20:41.168 "params": { 00:20:41.168 "name": "Nvme$subsystem", 00:20:41.168 "trtype": "$TEST_TRANSPORT", 00:20:41.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.168 "adrfam": "ipv4", 00:20:41.168 "trsvcid": "$NVMF_PORT", 00:20:41.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.168 "hdgst": ${hdgst:-false}, 00:20:41.168 "ddgst": ${ddgst:-false} 00:20:41.168 }, 00:20:41.168 "method": "bdev_nvme_attach_controller" 00:20:41.168 } 00:20:41.168 EOF 00:20:41.168 )") 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.168 { 00:20:41.168 "params": { 00:20:41.168 "name": "Nvme$subsystem", 00:20:41.168 "trtype": "$TEST_TRANSPORT", 00:20:41.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.168 "adrfam": "ipv4", 00:20:41.168 "trsvcid": "$NVMF_PORT", 00:20:41.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.168 "hdgst": ${hdgst:-false}, 00:20:41.168 "ddgst": ${ddgst:-false} 00:20:41.168 }, 00:20:41.168 "method": "bdev_nvme_attach_controller" 00:20:41.168 } 00:20:41.168 EOF 00:20:41.168 )") 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.168 { 00:20:41.168 "params": { 00:20:41.168 "name": "Nvme$subsystem", 00:20:41.168 "trtype": "$TEST_TRANSPORT", 00:20:41.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.168 "adrfam": "ipv4", 00:20:41.168 "trsvcid": "$NVMF_PORT", 00:20:41.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.168 "hdgst": ${hdgst:-false}, 00:20:41.168 "ddgst": ${ddgst:-false} 00:20:41.168 }, 00:20:41.168 "method": "bdev_nvme_attach_controller" 00:20:41.168 } 00:20:41.168 EOF 00:20:41.168 )") 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.168 { 00:20:41.168 "params": { 00:20:41.168 "name": "Nvme$subsystem", 00:20:41.168 "trtype": "$TEST_TRANSPORT", 00:20:41.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.168 "adrfam": "ipv4", 00:20:41.168 "trsvcid": "$NVMF_PORT", 00:20:41.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.168 
"hdgst": ${hdgst:-false}, 00:20:41.168 "ddgst": ${ddgst:-false} 00:20:41.168 }, 00:20:41.168 "method": "bdev_nvme_attach_controller" 00:20:41.168 } 00:20:41.168 EOF 00:20:41.168 )") 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:41.168 00:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:41.168 "params": { 00:20:41.168 "name": "Nvme1", 00:20:41.168 "trtype": "tcp", 00:20:41.168 "traddr": "10.0.0.2", 00:20:41.168 "adrfam": "ipv4", 00:20:41.168 "trsvcid": "4420", 00:20:41.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.168 "hdgst": false, 00:20:41.168 "ddgst": false 00:20:41.168 }, 00:20:41.168 "method": "bdev_nvme_attach_controller" 00:20:41.168 },{ 00:20:41.168 "params": { 00:20:41.168 "name": "Nvme2", 00:20:41.168 "trtype": "tcp", 00:20:41.168 "traddr": "10.0.0.2", 00:20:41.168 "adrfam": "ipv4", 00:20:41.168 "trsvcid": "4420", 00:20:41.168 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:41.168 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:41.168 "hdgst": false, 00:20:41.168 "ddgst": false 00:20:41.168 }, 00:20:41.168 "method": "bdev_nvme_attach_controller" 00:20:41.168 },{ 00:20:41.168 "params": { 00:20:41.168 "name": "Nvme3", 00:20:41.168 "trtype": "tcp", 00:20:41.168 "traddr": "10.0.0.2", 00:20:41.168 "adrfam": "ipv4", 00:20:41.168 "trsvcid": "4420", 00:20:41.168 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:41.168 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:41.168 "hdgst": false, 00:20:41.168 "ddgst": false 00:20:41.168 }, 00:20:41.168 "method": "bdev_nvme_attach_controller" 00:20:41.168 },{ 00:20:41.168 "params": { 00:20:41.168 "name": "Nvme4", 00:20:41.168 "trtype": "tcp", 00:20:41.168 "traddr": "10.0.0.2", 00:20:41.168 "adrfam": "ipv4", 00:20:41.168 "trsvcid": "4420", 00:20:41.168 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:41.168 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:41.168 "hdgst": false, 00:20:41.168 "ddgst": false 00:20:41.168 }, 00:20:41.168 "method": "bdev_nvme_attach_controller" 00:20:41.168 },{ 00:20:41.168 "params": { 00:20:41.168 "name": "Nvme5", 00:20:41.168 "trtype": "tcp", 00:20:41.168 "traddr": "10.0.0.2", 00:20:41.168 "adrfam": "ipv4", 00:20:41.168 "trsvcid": "4420", 00:20:41.168 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:41.168 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:41.168 "hdgst": false, 00:20:41.168 "ddgst": false 00:20:41.168 }, 00:20:41.168 "method": "bdev_nvme_attach_controller" 00:20:41.168 },{ 00:20:41.168 "params": { 00:20:41.168 "name": "Nvme6", 00:20:41.168 "trtype": "tcp", 00:20:41.168 "traddr": "10.0.0.2", 00:20:41.168 "adrfam": "ipv4", 00:20:41.168 "trsvcid": "4420", 00:20:41.168 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:41.168 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:41.168 "hdgst": false, 00:20:41.168 "ddgst": false 00:20:41.168 }, 00:20:41.168 "method": "bdev_nvme_attach_controller" 00:20:41.168 },{ 00:20:41.168 "params": { 00:20:41.168 "name": "Nvme7", 00:20:41.168 "trtype": "tcp", 00:20:41.168 "traddr": "10.0.0.2", 00:20:41.168 "adrfam": "ipv4", 00:20:41.168 "trsvcid": "4420", 00:20:41.168 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:41.168 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:41.168 "hdgst": false, 00:20:41.168 "ddgst": false 00:20:41.168 }, 00:20:41.168 "method": 
"bdev_nvme_attach_controller" 00:20:41.168 },{ 00:20:41.168 "params": { 00:20:41.168 "name": "Nvme8", 00:20:41.168 "trtype": "tcp", 00:20:41.168 "traddr": "10.0.0.2", 00:20:41.168 "adrfam": "ipv4", 00:20:41.168 "trsvcid": "4420", 00:20:41.168 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:41.168 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:41.168 "hdgst": false, 00:20:41.168 "ddgst": false 00:20:41.168 }, 00:20:41.168 "method": "bdev_nvme_attach_controller" 00:20:41.169 },{ 00:20:41.169 "params": { 00:20:41.169 "name": "Nvme9", 00:20:41.169 "trtype": "tcp", 00:20:41.169 "traddr": "10.0.0.2", 00:20:41.169 "adrfam": "ipv4", 00:20:41.169 "trsvcid": "4420", 00:20:41.169 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:41.169 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:41.169 "hdgst": false, 00:20:41.169 "ddgst": false 00:20:41.169 }, 00:20:41.169 "method": "bdev_nvme_attach_controller" 00:20:41.169 },{ 00:20:41.169 "params": { 00:20:41.169 "name": "Nvme10", 00:20:41.169 "trtype": "tcp", 00:20:41.169 "traddr": "10.0.0.2", 00:20:41.169 "adrfam": "ipv4", 00:20:41.169 "trsvcid": "4420", 00:20:41.169 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:41.169 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:41.169 "hdgst": false, 00:20:41.169 "ddgst": false 00:20:41.169 }, 00:20:41.169 "method": "bdev_nvme_attach_controller" 00:20:41.169 }' 00:20:41.169 [2024-07-16 00:21:59.837113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.169 [2024-07-16 00:21:59.909587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.069 Running I/O for 10 seconds... 00:20:43.069 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:43.069 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # return 0 00:20:43.069 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:43.069 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:43.069 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:43.069 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:43.069 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:43.069 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:43.069 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:43.069 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:43.069 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:43.069 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:43.069 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:43.069 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:43.069 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:43.069 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:43.069 00:22:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:43.069 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:43.069 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:43.069 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:43.069 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:43.328 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:43.328 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:43.328 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:43.328 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:43.328 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:43.328 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:43.328 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:43.328 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:43.328 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:43.328 00:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:43.587 00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:43.587 00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:43.587 00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:43.587 00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:43.588 00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:43.588 00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:43.588 00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:43.588 00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:43.588 00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:43.588 00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:20:43.588 00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:43.588 00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:43.588 00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1569812 00:20:43.588 00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@942 -- # '[' -z 1569812 ']' 00:20:43.588 00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # kill -0 1569812 00:20:43.588 00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # uname 00:20:43.588 00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1569812
00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # process_name=reactor_0
00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']'
00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1569812'
killing process with pid 1569812
00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@961 -- # kill 1569812
00:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # wait 1569812
00:20:43.588 Received shutdown signal, test time was about 0.926774 seconds
00:20:43.588
00:20:43.588 Latency(us)
00:20:43.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:43.588 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.588 Verification LBA range: start 0x0 length 0x400
00:20:43.588 Nvme1n1 : 0.89 216.50 13.53 0.00 0.00 292149.57 19261.89 248011.02
00:20:43.588 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.588 Verification LBA range: start 0x0 length 0x400
00:20:43.588 Nvme2n1 : 0.91 280.85 17.55 0.00 0.00 221277.27 18122.13 208803.39
00:20:43.588 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.588 Verification LBA range: start 0x0 length 0x400
00:20:43.588 Nvme3n1 : 0.90 285.07 17.82 0.00 0.00 214271.11 21085.50 215186.03
00:20:43.588 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.588 Verification LBA range: start 0x0 length 0x400
00:20:43.588 Nvme4n1 : 0.92 278.72 17.42 0.00 0.00 215196.27 17552.25 219745.06
00:20:43.588 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.588 Verification LBA range: start 0x0 length 0x400
00:20:43.588 Nvme5n1 : 0.90 288.10 18.01 0.00 0.00 202611.12 6838.54 211538.81
00:20:43.588 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.588 Verification LBA range: start 0x0 length 0x400
00:20:43.588 Nvme6n1 : 0.92 278.96 17.44 0.00 0.00 206957.97 17096.35 217921.45
00:20:43.588 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.588 Verification LBA range: start 0x0 length 0x400
00:20:43.588 Nvme7n1 : 0.91 281.10 17.57 0.00 0.00 201327.97 21427.42 213362.42
00:20:43.588 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.588 Verification LBA range: start 0x0 length 0x400
00:20:43.588 Nvme8n1 : 0.90 283.97 17.75 0.00 0.00 195213.58 15956.59 213362.42
00:20:43.588 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.588 Verification LBA range: start 0x0 length 0x400
00:20:43.588 Nvme9n1 : 0.93 276.49 17.28 0.00 0.00 197455.25 19831.76 217921.45
00:20:43.588 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.588 Verification LBA range: start 0x0 length 0x400
00:20:43.588 Nvme10n1 : 0.92 277.00 17.31 0.00 0.00 193103.03 20059.71 220656.86
00:20:43.588 ===================================================================================================================
00:20:43.588 Total : 2746.76 171.67 0.00 0.00 211936.41 6838.54 248011.02
00:20:43.847 00:22:02
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:44.785 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1569533 00:20:44.785 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:44.785 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:44.785 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:44.785 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:44.785 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:44.785 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:44.785 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:44.785 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:44.785 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:44.785 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:44.785 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:44.785 rmmod nvme_tcp 00:20:44.785 rmmod nvme_fabrics 00:20:45.043 rmmod nvme_keyring 00:20:45.043 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:45.043 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:45.043 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:45.043 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1569533 ']' 00:20:45.043 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1569533 00:20:45.043 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@942 -- # '[' -z 1569533 ']' 00:20:45.043 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # kill -0 1569533 00:20:45.043 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # uname 00:20:45.043 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:45.043 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1569533 00:20:45.043 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:20:45.043 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:20:45.043 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1569533' 00:20:45.043 killing process with pid 1569533 00:20:45.043 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@961 -- # kill 1569533 00:20:45.043 00:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # wait 1569533 00:20:45.300 00:22:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:45.300 00:22:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:45.300 00:22:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:45.300 00:22:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:45.300 00:22:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:45.300 00:22:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.300 00:22:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.300 00:22:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:47.835 00:20:47.835 real 0m8.154s 00:20:47.835 user 0m25.046s 00:20:47.835 sys 0m1.341s 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:47.835 ************************************ 00:20:47.835 END TEST nvmf_shutdown_tc2 00:20:47.835 ************************************ 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1136 -- # return 0 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # xtrace_disable 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:47.835 ************************************ 00:20:47.835 START TEST nvmf_shutdown_tc3 00:20:47.835 ************************************ 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1117 -- # nvmf_shutdown_tc3 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:47.835 00:22:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:47.835 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:47.835 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:47.835 Found net devices under 0000:86:00.0: cvl_0_0 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:47.835 Found net devices under 0000:86:00.1: cvl_0_1 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:47.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:47.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:20:47.835 00:20:47.835 --- 10.0.0.2 ping statistics --- 00:20:47.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.835 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:47.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:20:47.835 00:20:47.835 --- 10.0.0.1 ping statistics --- 00:20:47.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.835 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1571075 00:20:47.835 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1571075 00:20:47.836 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:47.836 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@823 -- # '[' -z 
1571075 ']' 00:20:47.836 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.836 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:47.836 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.836 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:47.836 00:22:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.836 [2024-07-16 00:22:06.619922] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:20:47.836 [2024-07-16 00:22:06.619971] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.836 [2024-07-16 00:22:06.676836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:48.095 [2024-07-16 00:22:06.757518] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.095 [2024-07-16 00:22:06.757552] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.095 [2024-07-16 00:22:06.757559] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.095 [2024-07-16 00:22:06.757565] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.095 [2024-07-16 00:22:06.757571] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
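[Editor's note: the nvmfappstart/waitforlisten sequence traced above reduces to a start-and-poll pattern roughly like the sketch below. This is a simplified reconstruction from the trace, not the verbatim helpers in nvmf/common.sh and autotest_common.sh; in particular, the rpc.py readiness probe and the retry budget are illustrative assumptions, and the trace's repeated "ip netns exec" prefixes are collapsed to one.]

    # Launch nvmf_tgt inside the test namespace with the traced arguments.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!

    # waitforlisten: poll until the target answers on its RPC socket
    # before any configuration RPCs are issued.
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done
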
00:20:48.095 [2024-07-16 00:22:06.757674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.095 [2024-07-16 00:22:06.757780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:48.095 [2024-07-16 00:22:06.757888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.095 [2024-07-16 00:22:06.757889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # return 0 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.662 [2024-07-16 00:22:07.468284] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:48.662 00:22:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:48.662 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:48.921 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:48.921 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:48.921 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:48.921 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:48.921 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.921 Malloc1 00:20:48.921 [2024-07-16 00:22:07.564037] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.921 Malloc2 00:20:48.921 Malloc3 00:20:48.921 Malloc4 00:20:48.921 Malloc5 00:20:48.921 Malloc6 00:20:49.179 Malloc7 00:20:49.179 Malloc8 00:20:49.179 Malloc9 00:20:49.179 Malloc10 00:20:49.179 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:49.179 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:49.179 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:49.179 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:49.179 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1571355 00:20:49.179 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1571355 /var/tmp/bdevperf.sock 00:20:49.179 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@823 -- # '[' -z 1571355 ']' 00:20:49.180 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.180 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:49.180 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:49.180 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:49.180 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:49.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
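[Editor's note: the per-controller JSON fragments printed next come from gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10. Per the xtrace, the helper accumulates one bdev_nvme_attach_controller entry per subsystem in a config array, roughly as in this sketch; the environment values shown are the ones visible in the trace (TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_PORT=4420).]

    config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller entry per subsystem, parameterized by environment;
        # hdgst/ddgst default to false when unset, as in the traced output.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
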
00:20:49.180 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:49.180 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:49.180 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:49.180 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:49.180 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:49.180 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:49.180 { 00:20:49.180 "params": { 00:20:49.180 "name": "Nvme$subsystem", 00:20:49.180 "trtype": "$TEST_TRANSPORT", 00:20:49.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.180 "adrfam": "ipv4", 00:20:49.180 "trsvcid": "$NVMF_PORT", 00:20:49.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.180 "hdgst": ${hdgst:-false}, 00:20:49.180 "ddgst": ${ddgst:-false} 00:20:49.180 }, 00:20:49.180 "method": "bdev_nvme_attach_controller" 00:20:49.180 } 00:20:49.180 EOF 00:20:49.180 )") 00:20:49.180 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:49.180 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:49.180 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:49.180 { 00:20:49.180 "params": { 00:20:49.180 "name": "Nvme$subsystem", 00:20:49.180 "trtype": "$TEST_TRANSPORT", 00:20:49.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.180 "adrfam": "ipv4", 00:20:49.180 "trsvcid": "$NVMF_PORT", 00:20:49.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.180 "hdgst": ${hdgst:-false}, 00:20:49.180 "ddgst": ${ddgst:-false} 00:20:49.180 }, 00:20:49.180 "method": "bdev_nvme_attach_controller" 00:20:49.180 } 00:20:49.180 EOF 00:20:49.180 )") 00:20:49.180 00:22:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:49.180 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:49.180 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:49.180 { 00:20:49.180 "params": { 00:20:49.180 "name": "Nvme$subsystem", 00:20:49.180 "trtype": "$TEST_TRANSPORT", 00:20:49.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.180 "adrfam": "ipv4", 00:20:49.180 "trsvcid": "$NVMF_PORT", 00:20:49.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.180 "hdgst": ${hdgst:-false}, 00:20:49.180 "ddgst": ${ddgst:-false} 00:20:49.180 }, 00:20:49.180 "method": "bdev_nvme_attach_controller" 00:20:49.180 } 00:20:49.180 EOF 00:20:49.180 )") 00:20:49.180 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:49.180 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:49.180 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:49.180 { 00:20:49.180 "params": { 00:20:49.180 "name": "Nvme$subsystem", 00:20:49.180 "trtype": "$TEST_TRANSPORT", 00:20:49.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.180 "adrfam": "ipv4", 00:20:49.180 "trsvcid": "$NVMF_PORT", 
00:20:49.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.180 "hdgst": ${hdgst:-false}, 00:20:49.180 "ddgst": ${ddgst:-false} 00:20:49.180 }, 00:20:49.180 "method": "bdev_nvme_attach_controller" 00:20:49.180 } 00:20:49.180 EOF 00:20:49.180 )") 00:20:49.180 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:49.180 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:49.180 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:49.180 { 00:20:49.180 "params": { 00:20:49.180 "name": "Nvme$subsystem", 00:20:49.180 "trtype": "$TEST_TRANSPORT", 00:20:49.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.180 "adrfam": "ipv4", 00:20:49.180 "trsvcid": "$NVMF_PORT", 00:20:49.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.180 "hdgst": ${hdgst:-false}, 00:20:49.180 "ddgst": ${ddgst:-false} 00:20:49.180 }, 00:20:49.180 "method": "bdev_nvme_attach_controller" 00:20:49.180 } 00:20:49.180 EOF 00:20:49.180 )") 00:20:49.180 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:49.180 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:49.180 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:49.180 { 00:20:49.180 "params": { 00:20:49.180 "name": "Nvme$subsystem", 00:20:49.180 "trtype": "$TEST_TRANSPORT", 00:20:49.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.180 "adrfam": "ipv4", 00:20:49.180 "trsvcid": "$NVMF_PORT", 00:20:49.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.180 "hdgst": ${hdgst:-false}, 00:20:49.180 "ddgst": ${ddgst:-false} 00:20:49.180 }, 00:20:49.180 "method": "bdev_nvme_attach_controller" 00:20:49.180 } 00:20:49.180 EOF 00:20:49.180 )") 00:20:49.180 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:49.180 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:49.180 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:49.180 { 00:20:49.180 "params": { 00:20:49.180 "name": "Nvme$subsystem", 00:20:49.180 "trtype": "$TEST_TRANSPORT", 00:20:49.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.180 "adrfam": "ipv4", 00:20:49.180 "trsvcid": "$NVMF_PORT", 00:20:49.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.180 "hdgst": ${hdgst:-false}, 00:20:49.180 "ddgst": ${ddgst:-false} 00:20:49.180 }, 00:20:49.180 "method": "bdev_nvme_attach_controller" 00:20:49.180 } 00:20:49.180 EOF 00:20:49.180 )") 00:20:49.180 [2024-07-16 00:22:08.029550] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:20:49.181 [2024-07-16 00:22:08.029600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1571355 ] 00:20:49.181 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:49.440 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:49.440 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:49.440 { 00:20:49.440 "params": { 00:20:49.440 "name": "Nvme$subsystem", 00:20:49.440 "trtype": "$TEST_TRANSPORT", 00:20:49.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.440 "adrfam": "ipv4", 00:20:49.440 "trsvcid": "$NVMF_PORT", 00:20:49.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.440 "hdgst": ${hdgst:-false}, 00:20:49.440 "ddgst": ${ddgst:-false} 00:20:49.440 }, 00:20:49.440 "method": "bdev_nvme_attach_controller" 00:20:49.440 } 00:20:49.440 EOF 00:20:49.440 )") 00:20:49.440 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:49.440 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:49.440 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:49.440 { 00:20:49.440 "params": { 00:20:49.440 "name": "Nvme$subsystem", 00:20:49.440 "trtype": "$TEST_TRANSPORT", 00:20:49.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.440 "adrfam": "ipv4", 00:20:49.440 "trsvcid": "$NVMF_PORT", 00:20:49.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.440 "hdgst": ${hdgst:-false}, 00:20:49.440 "ddgst": ${ddgst:-false} 00:20:49.440 }, 00:20:49.440 "method": "bdev_nvme_attach_controller" 00:20:49.440 } 00:20:49.440 EOF 00:20:49.440 )") 00:20:49.440 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:49.440 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:49.440 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:49.440 { 00:20:49.440 "params": { 00:20:49.440 "name": "Nvme$subsystem", 00:20:49.440 "trtype": "$TEST_TRANSPORT", 00:20:49.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.440 "adrfam": "ipv4", 00:20:49.440 "trsvcid": "$NVMF_PORT", 00:20:49.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.440 "hdgst": ${hdgst:-false}, 00:20:49.440 "ddgst": ${ddgst:-false} 00:20:49.440 }, 00:20:49.440 "method": "bdev_nvme_attach_controller" 00:20:49.440 } 00:20:49.440 EOF 00:20:49.440 )") 00:20:49.440 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:49.440 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
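[Editor's note: the jq . / IFS=, / printf steps traced around this point join the accumulated fragments and emit the final document. Only the comma-join and the jq validation are visible in the trace; the enclosing document structure in the sketch below is an assumption about how nvmf/common.sh wraps the entries.]

    # Join the fragments with commas, embed them in a config document,
    # and pretty-print/validate with jq (wrapper structure assumed, not traced).
    jq . <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(IFS=,; printf '%s\n' "${config[*]}")
      ]
    }
  ]
}
EOF

bdevperf then consumes the result through process substitution, which is why the traced command line above shows --json /dev/fd/63.
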
00:20:49.440 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:49.440 00:22:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:49.440 "params": { 00:20:49.440 "name": "Nvme1", 00:20:49.440 "trtype": "tcp", 00:20:49.440 "traddr": "10.0.0.2", 00:20:49.440 "adrfam": "ipv4", 00:20:49.440 "trsvcid": "4420", 00:20:49.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:49.440 "hdgst": false, 00:20:49.440 "ddgst": false 00:20:49.440 }, 00:20:49.440 "method": "bdev_nvme_attach_controller" 00:20:49.440 },{ 00:20:49.440 "params": { 00:20:49.440 "name": "Nvme2", 00:20:49.440 "trtype": "tcp", 00:20:49.440 "traddr": "10.0.0.2", 00:20:49.440 "adrfam": "ipv4", 00:20:49.440 "trsvcid": "4420", 00:20:49.440 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:49.440 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:49.440 "hdgst": false, 00:20:49.440 "ddgst": false 00:20:49.440 }, 00:20:49.440 "method": "bdev_nvme_attach_controller" 00:20:49.440 },{ 00:20:49.440 "params": { 00:20:49.440 "name": "Nvme3", 00:20:49.440 "trtype": "tcp", 00:20:49.440 "traddr": "10.0.0.2", 00:20:49.440 "adrfam": "ipv4", 00:20:49.440 "trsvcid": "4420", 00:20:49.440 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:49.440 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:49.440 "hdgst": false, 00:20:49.440 "ddgst": false 00:20:49.440 }, 00:20:49.440 "method": "bdev_nvme_attach_controller" 00:20:49.440 },{ 00:20:49.440 "params": { 00:20:49.440 "name": "Nvme4", 00:20:49.440 "trtype": "tcp", 00:20:49.440 "traddr": "10.0.0.2", 00:20:49.440 "adrfam": "ipv4", 00:20:49.440 "trsvcid": "4420", 00:20:49.440 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:49.440 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:49.440 "hdgst": false, 00:20:49.440 "ddgst": false 00:20:49.440 }, 00:20:49.440 "method": "bdev_nvme_attach_controller" 00:20:49.440 },{ 00:20:49.440 "params": { 00:20:49.440 "name": "Nvme5", 00:20:49.440 "trtype": "tcp", 00:20:49.440 "traddr": "10.0.0.2", 00:20:49.440 "adrfam": "ipv4", 00:20:49.440 "trsvcid": "4420", 00:20:49.440 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:49.441 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:49.441 "hdgst": false, 00:20:49.441 "ddgst": false 00:20:49.441 }, 00:20:49.441 "method": "bdev_nvme_attach_controller" 00:20:49.441 },{ 00:20:49.441 "params": { 00:20:49.441 "name": "Nvme6", 00:20:49.441 "trtype": "tcp", 00:20:49.441 "traddr": "10.0.0.2", 00:20:49.441 "adrfam": "ipv4", 00:20:49.441 "trsvcid": "4420", 00:20:49.441 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:49.441 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:49.441 "hdgst": false, 00:20:49.441 "ddgst": false 00:20:49.441 }, 00:20:49.441 "method": "bdev_nvme_attach_controller" 00:20:49.441 },{ 00:20:49.441 "params": { 00:20:49.441 "name": "Nvme7", 00:20:49.441 "trtype": "tcp", 00:20:49.441 "traddr": "10.0.0.2", 00:20:49.441 "adrfam": "ipv4", 00:20:49.441 "trsvcid": "4420", 00:20:49.441 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:49.441 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:49.441 "hdgst": false, 00:20:49.441 "ddgst": false 00:20:49.441 }, 00:20:49.441 "method": "bdev_nvme_attach_controller" 00:20:49.441 },{ 00:20:49.441 "params": { 00:20:49.441 "name": "Nvme8", 00:20:49.441 "trtype": "tcp", 00:20:49.441 "traddr": "10.0.0.2", 00:20:49.441 "adrfam": "ipv4", 00:20:49.441 "trsvcid": "4420", 00:20:49.441 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:49.441 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:49.441 "hdgst": false, 
00:20:49.441 "ddgst": false 00:20:49.441 }, 00:20:49.441 "method": "bdev_nvme_attach_controller" 00:20:49.441 },{ 00:20:49.441 "params": { 00:20:49.441 "name": "Nvme9", 00:20:49.441 "trtype": "tcp", 00:20:49.441 "traddr": "10.0.0.2", 00:20:49.441 "adrfam": "ipv4", 00:20:49.441 "trsvcid": "4420", 00:20:49.441 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:49.441 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:49.441 "hdgst": false, 00:20:49.441 "ddgst": false 00:20:49.441 }, 00:20:49.441 "method": "bdev_nvme_attach_controller" 00:20:49.441 },{ 00:20:49.441 "params": { 00:20:49.441 "name": "Nvme10", 00:20:49.441 "trtype": "tcp", 00:20:49.441 "traddr": "10.0.0.2", 00:20:49.441 "adrfam": "ipv4", 00:20:49.441 "trsvcid": "4420", 00:20:49.441 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:49.441 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:49.441 "hdgst": false, 00:20:49.441 "ddgst": false 00:20:49.441 }, 00:20:49.441 "method": "bdev_nvme_attach_controller" 00:20:49.441 }' 00:20:49.441 [2024-07-16 00:22:08.084536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.441 [2024-07-16 00:22:08.157392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.345 Running I/O for 10 seconds... 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # return 0 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
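[Editor's note: at this point the trace is inside waitforio, the same read gate already exercised in the tc2 run above. Per the xtrace of target/shutdown.sh lines 59-69, the loop has roughly the following shape; this is a simplified reconstruction, and rpc_cmd is the test suite's RPC wrapper as seen in the trace.]

    waitforio() {
        # Poll bdevperf's iostat until Nvme1n1 has completed at least 100 reads,
        # retrying up to 10 times with a 250 ms pause between attempts.
        local ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }
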
00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1571075 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@942 -- # '[' -z 1571075 ']' 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # kill -0 1571075 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@947 -- # uname 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1571075 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1571075' killing process with pid 1571075 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@961 -- # kill 1571075 00:20:51.928 00:22:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # wait 1571075
00:20:51.928 [2024-07-16 00:22:10.701095] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057ad0 is same with the state(5) to be set
00:20:51.929 [previous message repeated for tqpair=0x1057ad0 at timestamps 00:22:10.701146 through 00:22:10.701539]
00:20:51.929 [2024-07-16 00:22:10.703093] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set
00:20:51.929 [previous message repeated for tqpair=0x123ba00 at timestamps 00:22:10.703119 through 00:22:10.703339]
*ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.929 [2024-07-16 00:22:10.703345] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.929 [2024-07-16 00:22:10.703351] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703357] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703364] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703370] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703376] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703382] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703388] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703394] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703401] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703407] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703413] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703418] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703424] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703430] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703436] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703443] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703449] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703455] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703461] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703467] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 
00:22:10.703473] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703480] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703486] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703492] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703498] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703504] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703510] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.703516] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ba00 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705265] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705284] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705292] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705299] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705305] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705311] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705318] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705327] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705333] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705339] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705345] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705352] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705358] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705364] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same 
with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705370] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705376] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705383] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705389] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705395] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705401] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705407] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705414] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705421] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705427] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705433] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705439] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705445] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705451] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705457] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705463] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705469] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705475] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705481] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705488] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705495] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705502] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705509] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705514] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705521] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705527] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705533] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705540] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705546] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705552] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705558] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705564] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705570] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705577] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705583] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705589] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705595] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705601] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705607] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705613] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705619] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705626] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705633] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the 
state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705639] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705645] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705652] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705659] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705665] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.705673] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1057f70 is same with the state(5) to be set 00:20:51.930 [2024-07-16 00:22:10.706785] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706810] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706818] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706824] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706830] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706838] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706844] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706850] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706856] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706863] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706869] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706876] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706882] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706887] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706893] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706900] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706906] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706912] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706919] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706926] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706932] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706938] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706944] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706951] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706956] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706963] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706974] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706980] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706986] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706992] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.706998] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707004] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707010] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707017] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707023] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707030] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707036] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 
00:22:10.707042] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707048] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707054] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707059] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707065] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707072] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707078] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707084] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707090] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707096] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707102] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707108] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707113] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707125] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707131] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707137] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707145] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707152] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707158] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707164] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707170] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707176] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same 
with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707182] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707189] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707194] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.707201] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058410 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708009] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10588d0 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708035] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10588d0 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708043] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10588d0 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708049] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10588d0 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708055] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10588d0 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708062] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10588d0 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708068] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10588d0 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708074] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10588d0 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708080] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10588d0 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708408] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058c40 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708425] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058c40 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708432] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058c40 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708438] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058c40 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708445] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058c40 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708451] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058c40 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708457] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058c40 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708463] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058c40 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708472] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058c40 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708479] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058c40 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708485] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058c40 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708491] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058c40 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708497] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058c40 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708503] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058c40 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.708509] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1058c40 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.709025] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10590e0 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.709035] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10590e0 is same with the state(5) to be set 00:20:51.931 [2024-07-16 00:22:10.710433] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710445] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710451] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710458] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710464] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710470] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710476] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710482] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710489] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710495] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710501] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710507] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710512] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the 
state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710518] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710524] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710530] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710536] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710542] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710552] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710559] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710564] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710570] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710576] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710582] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710589] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710595] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710601] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710607] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710613] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710619] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710624] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710631] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710637] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710643] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710649] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710655] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710661] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710667] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710673] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710679] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710685] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710692] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710699] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710705] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710711] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710719] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710725] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710731] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710736] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710749] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710755] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710761] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710767] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710773] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710779] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 
00:22:10.710785] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710792] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710798] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710804] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710810] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710815] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.710821] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059a40 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.711383] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.711395] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.711401] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.711407] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.711413] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.711420] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.711426] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.711432] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.711438] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.932 [2024-07-16 00:22:10.711446] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711452] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711458] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711464] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711470] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711477] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same 
with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711483] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711488] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711494] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711502] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711507] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711513] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711519] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711525] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711530] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711536] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711542] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711548] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711554] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711559] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711566] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711572] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711578] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711584] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711589] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711595] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711601] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711608] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711614] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711619] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711625] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711631] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711637] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711642] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711648] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711653] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711659] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711665] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711671] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711677] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711683] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711689] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711695] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711701] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711707] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711712] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711719] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711724] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the state(5) to be set 00:20:51.933 [2024-07-16 00:22:10.711731] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059ee0 is same with the 
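Every one of the lines collapsed above comes from a single guard in SPDK's TCP transport: nvmf_tcp_qpair_set_recv_state() in lib/nvmf/tcp.c logs and returns when asked to set the receive state the qpair already holds, so a qpair stuck in one state during teardown can emit the message on every poll. The sketch below is a minimal, self-contained approximation of that guard, not the upstream function; the enum layout (value 5 being NVME_TCP_PDU_RECV_STATE_ERROR) and the abbreviated bodies are assumptions about the SPDK revision under test.

#include <stdio.h>

/* Stand-in for SPDK's logging macro so the sketch builds outside the tree. */
#define SPDK_ERRLOG(fmt, ...) \
	fprintf(stderr, "tcp.c:1621:%s: *ERROR*: " fmt, __func__, ##__VA_ARGS__)

/* PDU receive states; the exact layout is an assumption about this SPDK
 * revision, where value 5 is the terminal ERROR state seen in the log. */
enum nvme_tcp_pdu_recv_state {
	NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_READY,   /* 0 */
	NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_CH,      /* 1 */
	NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_PSH,     /* 2 */
	NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_PAYLOAD, /* 3 */
	NVME_TCP_PDU_RECV_STATE_QUIESCING,         /* 4 */
	NVME_TCP_PDU_RECV_STATE_ERROR,             /* 5 */
};

struct spdk_nvmf_tcp_qpair {
	enum nvme_tcp_pdu_recv_state recv_state;
	/* remaining qpair fields elided */
};

static void
nvmf_tcp_qpair_set_recv_state(struct spdk_nvmf_tcp_qpair *tqpair,
			      enum nvme_tcp_pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		/* Re-setting the current state is treated as an error and
		 * logged, which is what floods the console while a qpair
		 * sits in the ERROR state during disconnect. */
		SPDK_ERRLOG("The recv state of tqpair=%p is same with the state(%d) to be set\n",
			    (void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
	/* per-state bookkeeping elided */
}

int main(void)
{
	struct spdk_nvmf_tcp_qpair q = { .recv_state = NVME_TCP_PDU_RECV_STATE_ERROR };

	/* Setting ERROR twice reproduces the repeated message pattern above. */
	nvmf_tcp_qpair_set_recv_state(&q, NVME_TCP_PDU_RECV_STATE_ERROR);
	nvmf_tcp_qpair_set_recv_state(&q, NVME_TCP_PDU_RECV_STATE_ERROR);
	return 0;
}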
00:20:51.933 [2024-07-16 00:22:10.716915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.933 [2024-07-16 00:22:10.716952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.933 [2024-07-16 00:22:10.716968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.933 [2024-07-16 00:22:10.716975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.933 [2024-07-16 00:22:10.716984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.933 [2024-07-16 00:22:10.716991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.933 [2024-07-16 00:22:10.716999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.933 [2024-07-16 00:22:10.717006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.933 [2024-07-16 00:22:10.717014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.933 [2024-07-16 00:22:10.717020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.933 [2024-07-16 00:22:10.717028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.933 [2024-07-16 00:22:10.717034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.933 [2024-07-16 00:22:10.717043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.933 [2024-07-16 00:22:10.717049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.933 [2024-07-16 00:22:10.717057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.933 [2024-07-16 00:22:10.717064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.933 [2024-07-16 00:22:10.717072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.933 [2024-07-16 00:22:10.717078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.933 [2024-07-16 00:22:10.717086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.933 [2024-07-16 00:22:10.717092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.933 [2024-07-16 00:22:10.717100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.933 [2024-07-16 00:22:10.717106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.933 [2024-07-16 00:22:10.717114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.933 [2024-07-16 00:22:10.717120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.933 [2024-07-16 00:22:10.717128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.933 [2024-07-16 00:22:10.717134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.933 [2024-07-16 00:22:10.717143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.933 [2024-07-16 00:22:10.717150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.933 [2024-07-16 00:22:10.717158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.933 [2024-07-16 00:22:10.717165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.933 [2024-07-16 00:22:10.717172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.933 [2024-07-16 00:22:10.717178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.933 [2024-07-16 00:22:10.717186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.933 [2024-07-16 00:22:10.717193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.933 [2024-07-16 00:22:10.717201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.934 [2024-07-16 00:22:10.717207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717648] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717793] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.934 [2024-07-16 00:22:10.717821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.934 [2024-07-16 00:22:10.717828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.935 [2024-07-16 00:22:10.717835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.935 [2024-07-16 00:22:10.717842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.935 [2024-07-16 00:22:10.717848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.935 [2024-07-16 00:22:10.717856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.935 [2024-07-16 00:22:10.717862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.935 [2024-07-16 00:22:10.717872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.935 [2024-07-16 00:22:10.717878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.935 [2024-07-16 00:22:10.717905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:51.935 [2024-07-16 00:22:10.717959] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x229bde0 was disconnected and freed. reset controller. 
00:20:51.935 [2024-07-16 00:22:10.718166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:51.935 [2024-07-16 00:22:10.718183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (same ASYNC EVENT REQUEST/ABORTED - SQ DELETION pair repeated for cid:1-3) ...
00:20:51.935 [2024-07-16 00:22:10.718237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23240d0 is same with the state(5) to be set
... (same four aborted ASYNC EVENT REQUESTs followed by a recv-state error repeated for tqpair=0x219fbf0, 0x217b190, 0x21951d0, 0x219cb30, 0x232d050, 0x2158c70, 0x23248d0, 0x1ca7340 and 0x230d8b0, 00:22:10.718263 - 00:22:10.718931) ...
00:20:51.936 [2024-07-16 00:22:10.719395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.936 [2024-07-16 00:22:10.719418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (same WRITE command/ABORTED - SQ DELETION completion pair repeated for cid:1-62, lba:24704-32512) ...
00:20:51.937 [2024-07-16 00:22:10.725942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.937 [2024-07-16 00:22:10.725948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.937 [2024-07-16 00:22:10.726017] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2154040 was disconnected and freed. reset controller.
00:20:51.937 [2024-07-16 00:22:10.726088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.937 [2024-07-16 00:22:10.726096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.937 [2024-07-16 00:22:10.726106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.937 [2024-07-16 00:22:10.726113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.937 [2024-07-16 00:22:10.726123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.937 [2024-07-16 00:22:10.726130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.937 [2024-07-16 00:22:10.726138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.937 [2024-07-16 00:22:10.726144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.937 [2024-07-16 00:22:10.726152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.937 [2024-07-16 00:22:10.726159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.937 [2024-07-16 00:22:10.726167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.937 [2024-07-16 00:22:10.726173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.937 [2024-07-16 00:22:10.726181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.937 [2024-07-16 00:22:10.726187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.937 [2024-07-16 00:22:10.726196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.938 [2024-07-16 00:22:10.726810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.938 [2024-07-16 00:22:10.726818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.726824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.726832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.726838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.726848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.726854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.726862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.726868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.726877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.726883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.726891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.726897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.726904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.726911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.726919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.726925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.726933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.726940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.726947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.726954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.726961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.726968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.726976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.726982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.726990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.726996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.727004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.727010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.727068] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x229a910 was disconnected and freed. reset controller.
00:20:51.939 [2024-07-16 00:22:10.728320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.939 [2024-07-16 00:22:10.728748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.939 [2024-07-16 00:22:10.728756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.728762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.728770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.728776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.728784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.728791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.728799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.728805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.728813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.728819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.728829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.728836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.728844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.728850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.728858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.728865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.728873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.728879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.728887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.728895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.728903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.728910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.728917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.728923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.728932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.728938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.728946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.728953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.728961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.728967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.728975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.728981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.728989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.728996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.729004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.729010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.729018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.729025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.729032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.729039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.729047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.729053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.729062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.729068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.729077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.729083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.729092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.729098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.729105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.729112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.729120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.729126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.729134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.729140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.729148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.729154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.729162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.729168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.729176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.729182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.729190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.729197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.729205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.729212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.729220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.729231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.729240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.729246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.729254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.729262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.729270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.940 [2024-07-16 00:22:10.729276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.940 [2024-07-16 00:22:10.729361] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2152b70 was disconnected and freed. reset controller.
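With the qpairs torn down, the reset sequence below plays out per subsystem: nvme_ctrlr_disconnect logs "resetting controller", the stale TCP qpairs fail to flush ("Bad file descriptor"), the reconnect attempts are refused (errno = 111, ECONNREFUSED on Linux, i.e. nothing was accepting connections on 10.0.0.2:4420 at that moment), and each controller lands "in failed state" with "Resetting controller failed." A companion sketch, under the same format assumptions as the snippet above, groups those outcomes per NQN:

import re
from collections import defaultdict

# Matches the bracketed subsystem name plus the phase messages seen below, e.g.
# "[nqn.2016-06.io.spdk:cnode7] resetting controller".
EVENT = re.compile(
    r"\[(nqn\.[^\]]+)\] (resetting controller|Ctrlr is in error state|"
    r"controller reinitialization failed|in failed state\.)"
)

def reset_outcomes(text: str) -> dict:
    """Return, per NQN, the ordered reset-related events found in the log."""
    events = defaultdict(list)
    for nqn, phase in (m.groups() for m in EVENT.finditer(text)):
        events[nqn].append(phase)
    return dict(events)

# Against the entries below this would yield, e.g.:
# reset_outcomes(text)["nqn.2016-06.io.spdk:cnode7"] ==
#     ["resetting controller", "Ctrlr is in error state",
#      "controller reinitialization failed", "in failed state."]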
00:20:51.940 [2024-07-16 00:22:10.731309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:51.940 [2024-07-16 00:22:10.731341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:20:51.940 [2024-07-16 00:22:10.731355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca7340 (9): Bad file descriptor
00:20:51.940 [2024-07-16 00:22:10.731366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b190 (9): Bad file descriptor
00:20:51.940 [2024-07-16 00:22:10.731377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23240d0 (9): Bad file descriptor
00:20:51.940 [2024-07-16 00:22:10.731394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219fbf0 (9): Bad file descriptor
00:20:51.940 [2024-07-16 00:22:10.731409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21951d0 (9): Bad file descriptor
00:20:51.940 [2024-07-16 00:22:10.731420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219cb30 (9): Bad file descriptor
00:20:51.940 [2024-07-16 00:22:10.731432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232d050 (9): Bad file descriptor
00:20:51.940 [2024-07-16 00:22:10.731445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2158c70 (9): Bad file descriptor
00:20:51.940 [2024-07-16 00:22:10.731458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23248d0 (9): Bad file descriptor
00:20:51.940 [2024-07-16 00:22:10.731472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230d8b0 (9): Bad file descriptor
00:20:51.940 [2024-07-16 00:22:10.732938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:51.940 [2024-07-16 00:22:10.732964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:20:51.941 [2024-07-16 00:22:10.733045] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:51.941 [2024-07-16 00:22:10.733649] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:51.941 [2024-07-16 00:22:10.733700] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:51.941 [2024-07-16 00:22:10.733742] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:51.941 [2024-07-16 00:22:10.733975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:51.941 [2024-07-16 00:22:10.733988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b190 with addr=10.0.0.2, port=4420
00:20:51.941 [2024-07-16 00:22:10.733997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b190 is same with the state(5) to be set
00:20:51.941 [2024-07-16 00:22:10.734286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:51.941 [2024-07-16 00:22:10.734298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca7340 with addr=10.0.0.2, port=4420
00:20:51.941 [2024-07-16 00:22:10.734305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca7340 is same with the state(5) to be set
00:20:51.941 [2024-07-16 00:22:10.734532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:51.941 [2024-07-16 00:22:10.734542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219cb30 with addr=10.0.0.2, port=4420
00:20:51.941 [2024-07-16 00:22:10.734556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cb30 is same with the state(5) to be set
00:20:51.941 [2024-07-16 00:22:10.734753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:51.941 [2024-07-16 00:22:10.734762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21951d0 with addr=10.0.0.2, port=4420
00:20:51.941 [2024-07-16 00:22:10.734769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21951d0 is same with the state(5) to be set
00:20:51.941 [2024-07-16 00:22:10.734824] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:51.941 [2024-07-16 00:22:10.734868] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:51.941 [2024-07-16 00:22:10.735156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b190 (9): Bad file descriptor
00:20:51.941 [2024-07-16 00:22:10.735169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca7340 (9): Bad file descriptor
00:20:51.941 [2024-07-16 00:22:10.735178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219cb30 (9): Bad file descriptor
00:20:51.941 [2024-07-16 00:22:10.735186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21951d0 (9): Bad file descriptor
00:20:51.941 [2024-07-16 00:22:10.735253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:20:51.941 [2024-07-16 00:22:10.735263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:20:51.941 [2024-07-16 00:22:10.735271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:20:51.941 [2024-07-16 00:22:10.735283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:20:51.941 [2024-07-16 00:22:10.735289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:20:51.941 [2024-07-16 00:22:10.735295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:20:51.941 [2024-07-16 00:22:10.735304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:20:51.941 [2024-07-16 00:22:10.735310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:20:51.941 [2024-07-16 00:22:10.735316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:20:51.941 [2024-07-16 00:22:10.735326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:20:51.941 [2024-07-16 00:22:10.735331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:20:51.941 [2024-07-16 00:22:10.735337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:20:51.941 [2024-07-16 00:22:10.735377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:51.941 [2024-07-16 00:22:10.735384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:51.941 [2024-07-16 00:22:10.735390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:51.941 [2024-07-16 00:22:10.735395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:51.941 [2024-07-16 00:22:10.741438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.941 [2024-07-16 00:22:10.741454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.941 [2024-07-16 00:22:10.741468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.941 [2024-07-16 00:22:10.741475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.941 [2024-07-16 00:22:10.741487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.941 [2024-07-16 00:22:10.741493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.941 [2024-07-16 00:22:10.741502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.941 [2024-07-16 00:22:10.741508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.941 [2024-07-16 00:22:10.741516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.941 [2024-07-16 00:22:10.741524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.941 [2024-07-16 00:22:10.741532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.941 [2024-07-16 00:22:10.741538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.941 [2024-07-16 00:22:10.741546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.941 [2024-07-16 00:22:10.741552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.941 [2024-07-16 00:22:10.741560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.941 [2024-07-16 00:22:10.741567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.941 [2024-07-16 00:22:10.741575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.941 [2024-07-16 00:22:10.741581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.941 [2024-07-16 00:22:10.741589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.941 [2024-07-16 00:22:10.741595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.941 [2024-07-16 00:22:10.741604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.941 [2024-07-16 00:22:10.741610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.941 [2024-07-16 00:22:10.741618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.941 [2024-07-16 00:22:10.741624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.941 [2024-07-16 00:22:10.741632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.941 [2024-07-16 00:22:10.741639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.941 [2024-07-16 00:22:10.741647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.941 [2024-07-16 00:22:10.741653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.941 [2024-07-16 00:22:10.741662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.941 [2024-07-16 00:22:10.741669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.941 [2024-07-16 00:22:10.741678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.941 [2024-07-16 00:22:10.741684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.941 [2024-07-16 00:22:10.741692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.941 [2024-07-16 00:22:10.741699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.941 [2024-07-16 00:22:10.741713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.941 [2024-07-16 00:22:10.741721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.941 [2024-07-16 00:22:10.741727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.941 [2024-07-16 00:22:10.741735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.941 [2024-07-16 00:22:10.741742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.741750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.741756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.741764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.741771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.741779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.741786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.741794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.741800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.741808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.741815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.741823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.741829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.741837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.741844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.741853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.741859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.741868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.741874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.741882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.741889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.741897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.741903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.741911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.741918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.741926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.741932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.741940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.741946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.741954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.741961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.741969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.741975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.741984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.741990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.741998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:51.942 [2024-07-16 00:22:10.742005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 
00:22:10.742151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742304] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.942 [2024-07-16 00:22:10.742355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.942 [2024-07-16 00:22:10.742362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.742370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.742377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.742385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.742391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.742398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299ea0 is same with the state(5) to be set 00:20:51.943 [2024-07-16 00:22:10.743419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.743988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.743996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.744004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.744011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.744019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.943 [2024-07-16 00:22:10.744025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.943 [2024-07-16 00:22:10.744033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744077] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.744379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.744387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2220490 is same with the state(5) to be set 00:20:51.944 [2024-07-16 00:22:10.745397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.745412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.745422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.745428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.745437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.745443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.745451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.745458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.745466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.745472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.745481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.745487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.745495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.745502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.745510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.745516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.745524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.745531] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.745539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.745546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.745555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.745561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.745572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.745579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.745588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.745594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.745604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.745610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.745619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.745625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.745633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.745640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.745648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.745654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.745662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.944 [2024-07-16 00:22:10.745669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.944 [2024-07-16 00:22:10.745677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.745989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.745997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.746004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.746012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.746018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.746027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.746033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.746042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.746048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.746056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.746063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.945 [2024-07-16 00:22:10.746071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.945 [2024-07-16 00:22:10.746077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.946 [2024-07-16 00:22:10.746085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.946 [2024-07-16 00:22:10.746091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.946 [2024-07-16 00:22:10.746099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.946 [2024-07-16 00:22:10.746105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.946 [2024-07-16 00:22:10.746114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.946 [2024-07-16 00:22:10.746124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:51.946 [2024-07-16 00:22:10.746133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.946 [2024-07-16 00:22:10.746139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION (00/08) completion pairs repeated for cid:50-62 (lba:22784-24320, len:128) ...]
00:20:51.946 [2024-07-16 00:22:10.746346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.946 [2024-07-16 00:22:10.746352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.946 [2024-07-16 00:22:10.746359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2221920 is same with the state(5) to be set
00:20:51.946 [2024-07-16 00:22:10.747374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.946 [2024-07-16 00:22:10.747387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION (00/08) completion pairs repeated for cid:1-62 (lba:16512-24320, len:128) ...]
00:20:51.948 [2024-07-16 00:22:10.748316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.948 [2024-07-16 00:22:10.748322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.948 [2024-07-16 00:22:10.748329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229d2b0 is same with the state(5) to be set
00:20:51.948 [2024-07-16 00:22:10.749328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.948 [2024-07-16 00:22:10.749341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION (00/08) completion pairs repeated for cid:1-62 (lba:16512-24320, len:128) ...]
00:20:51.949 [2024-07-16 00:22:10.750298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.949 [2024-07-16 00:22:10.750304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.949 [2024-07-16 00:22:10.750311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2281a60 is same with the state(5) to be set
00:20:51.949 [2024-07-16 00:22:10.752472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.949 [2024-07-16 00:22:10.752491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION (00/08) completion pairs repeated for cid:1-54 (lba:16512-23296, len:128) ...]
00:20:51.951 [2024-07-16 00:22:10.753294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.951 [2024-07-16 00:22:10.753302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.951 [2024-07-16 00:22:10.753310] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.951 [2024-07-16 00:22:10.753316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.951 [2024-07-16 00:22:10.753324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.951 [2024-07-16 00:22:10.753330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.951 [2024-07-16 00:22:10.753338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.951 [2024-07-16 00:22:10.753345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.951 [2024-07-16 00:22:10.753353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.951 [2024-07-16 00:22:10.753359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.951 [2024-07-16 00:22:10.753367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.951 [2024-07-16 00:22:10.753374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.951 [2024-07-16 00:22:10.753382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.951 [2024-07-16 00:22:10.753388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.951 [2024-07-16 00:22:10.753396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.951 [2024-07-16 00:22:10.753402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.951 [2024-07-16 00:22:10.753411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.951 [2024-07-16 00:22:10.753417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.951 [2024-07-16 00:22:10.753424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2282ef0 is same with the state(5) to be set 00:20:51.951 [2024-07-16 00:22:10.755205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:51.951 [2024-07-16 00:22:10.755228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:51.951 [2024-07-16 00:22:10.755238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:51.951 [2024-07-16 00:22:10.755246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:20:51.951 [2024-07-16 00:22:10.755316] 
bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:51.951 [2024-07-16 00:22:10.755331] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:51.951 [2024-07-16 00:22:10.755394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:52.211 task offset: 26496 on job bdev=Nvme7n1 fails
00:20:52.211
00:20:52.211 Latency(us)
00:20:52.211 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:52.211 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:52.211 Job: Nvme1n1 ended in about 0.79 seconds with error
00:20:52.211 Verification LBA range: start 0x0 length 0x400
00:20:52.211 Nvme1n1 : 0.79 162.96 10.18 81.48 0.00 258944.07 17894.18 217921.45
00:20:52.211 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:52.211 Job: Nvme2n1 ended in about 0.79 seconds with error
00:20:52.211 Verification LBA range: start 0x0 length 0x400
00:20:52.211 Nvme2n1 : 0.79 162.55 10.16 81.27 0.00 254366.05 18350.08 220656.86
00:20:52.211 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:52.211 Job: Nvme3n1 ended in about 0.79 seconds with error
00:20:52.211 Verification LBA range: start 0x0 length 0x400
00:20:52.211 Nvme3n1 : 0.79 162.14 10.13 81.07 0.00 249773.49 16868.40 210627.01
00:20:52.211 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:52.211 Job: Nvme4n1 ended in about 0.77 seconds with error
00:20:52.212 Verification LBA range: start 0x0 length 0x400
00:20:52.212 Nvme4n1 : 0.77 247.83 15.49 82.61 0.00 179578.77 12480.33 196038.12
00:20:52.212 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:52.212 Job: Nvme5n1 ended in about 0.77 seconds with error
00:20:52.212 Verification LBA range: start 0x0 length 0x400
00:20:52.212 Nvme5n1 : 0.77 248.56 15.54 82.85 0.00 175050.24 13506.11 217921.45
00:20:52.212 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:52.212 Job: Nvme6n1 ended in about 0.77 seconds with error
00:20:52.212 Verification LBA range: start 0x0 length 0x400
00:20:52.212 Nvme6n1 : 0.77 248.27 15.52 82.76 0.00 171202.11 13905.03 212450.62
00:20:52.212 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:52.212 Job: Nvme7n1 ended in about 0.77 seconds with error
00:20:52.212 Verification LBA range: start 0x0 length 0x400
00:20:52.212 Nvme7n1 : 0.77 249.30 15.58 83.10 0.00 166466.67 10428.77 217921.45
00:20:52.212 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:52.212 Job: Nvme8n1 ended in about 0.79 seconds with error
00:20:52.212 Verification LBA range: start 0x0 length 0x400
00:20:52.212 Nvme8n1 : 0.79 161.74 10.11 80.87 0.00 223835.42 19147.91 231598.53
00:20:52.212 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:52.212 Job: Nvme9n1 ended in about 0.79 seconds with error
00:20:52.212 Verification LBA range: start 0x0 length 0x400
00:20:52.212 Nvme9n1 : 0.79 161.34 10.08 80.67 0.00 219217.62 21883.33 246187.41
00:20:52.212 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:52.212 Job: Nvme10n1 ended in about 0.80 seconds with error
00:20:52.212 Verification LBA range: start 0x0 length 0x400
00:20:52.212 Nvme10n1 : 0.80 160.71 10.04 80.35 0.00 214986.57 32824.99 225215.89
00:20:52.212 ===================================================================================================================
00:20:52.212 Total : 1965.39 122.84 817.04 0.00 206840.03 10428.77 246187.41
00:20:52.212 [2024-07-16 00:22:10.780325] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:52.212 [2024-07-16 00:22:10.780368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:52.212 [2024-07-16 00:22:10.780658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:52.212 [2024-07-16 00:22:10.780676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2158c70 with addr=10.0.0.2, port=4420
00:20:52.212 [2024-07-16 00:22:10.780685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2158c70 is same with the state(5) to be set
00:20:52.212 [2024-07-16 00:22:10.780934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:52.212 [2024-07-16 00:22:10.780944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23248d0 with addr=10.0.0.2, port=4420
00:20:52.212 [2024-07-16 00:22:10.780957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23248d0 is same with the state(5) to be set
00:20:52.212 [2024-07-16 00:22:10.781256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:52.212 [2024-07-16 00:22:10.781266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232d050 with addr=10.0.0.2, port=4420
00:20:52.212 [2024-07-16 00:22:10.781273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232d050 is same with the state(5) to be set
00:20:52.212 [2024-07-16 00:22:10.781470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:52.212 [2024-07-16 00:22:10.781479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219fbf0 with addr=10.0.0.2, port=4420
00:20:52.212 [2024-07-16 00:22:10.781486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219fbf0 is same with the state(5) to be set
00:20:52.212 [2024-07-16 00:22:10.782869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:20:52.212 [2024-07-16 00:22:10.782883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:52.212 [2024-07-16 00:22:10.782893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:20:52.212 [2024-07-16 00:22:10.782902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:52.212 [2024-07-16 00:22:10.783249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:52.212 [2024-07-16 00:22:10.783263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23240d0 with addr=10.0.0.2, port=4420
00:20:52.212 [2024-07-16 00:22:10.783271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23240d0 is same with the state(5) to be set
00:20:52.212 [2024-07-16 00:22:10.783498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:52.212 [2024-07-16 00:22:10.783508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230d8b0 with addr=10.0.0.2, port=4420
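The MiB/s column in the table above follows directly from the IOPS column: every job runs 65536-byte IOs, so MiB/s = IOPS * 65536 / 2^20 = IOPS / 16. A quick awk sanity check against the Nvme1n1 and Total rows (an illustrative one-liner, not part of the test harness):

  awk 'BEGIN {
      # 64 KiB per IO => MiB/s is IOPS divided by 16
      printf "Nvme1n1: %.2f MiB/s (reported 10.18)\n", 162.96 / 16
      printf "Total:   %.2f MiB/s (reported 122.84)\n", 1965.39 / 16
  }'

Both reproduce the reported values to within rounding, which confirms the table stays internally consistent even though every job ended with an error.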
00:20:52.212 [2024-07-16 00:22:10.783514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230d8b0 is same with the state(5) to be set 00:20:52.212 [2024-07-16 00:22:10.783525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2158c70 (9): Bad file descriptor 00:20:52.212 [2024-07-16 00:22:10.783536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23248d0 (9): Bad file descriptor 00:20:52.212 [2024-07-16 00:22:10.783545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232d050 (9): Bad file descriptor 00:20:52.212 [2024-07-16 00:22:10.783553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219fbf0 (9): Bad file descriptor 00:20:52.212 [2024-07-16 00:22:10.783585] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:52.212 [2024-07-16 00:22:10.783595] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:52.212 [2024-07-16 00:22:10.783608] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:52.212 [2024-07-16 00:22:10.783617] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:52.212 [2024-07-16 00:22:10.783899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.212 [2024-07-16 00:22:10.783910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21951d0 with addr=10.0.0.2, port=4420 00:20:52.212 [2024-07-16 00:22:10.783917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21951d0 is same with the state(5) to be set 00:20:52.212 [2024-07-16 00:22:10.784107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.212 [2024-07-16 00:22:10.784117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219cb30 with addr=10.0.0.2, port=4420 00:20:52.212 [2024-07-16 00:22:10.784128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cb30 is same with the state(5) to be set 00:20:52.212 [2024-07-16 00:22:10.784403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.212 [2024-07-16 00:22:10.784414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca7340 with addr=10.0.0.2, port=4420 00:20:52.212 [2024-07-16 00:22:10.784421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca7340 is same with the state(5) to be set 00:20:52.212 [2024-07-16 00:22:10.784680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.212 [2024-07-16 00:22:10.784690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b190 with addr=10.0.0.2, port=4420 00:20:52.212 [2024-07-16 00:22:10.784696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b190 is same with the state(5) to be set 00:20:52.212 [2024-07-16 00:22:10.784705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23240d0 (9): Bad file descriptor 00:20:52.212 [2024-07-16 00:22:10.784714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230d8b0 (9): Bad file descriptor 00:20:52.212 
[2024-07-16 00:22:10.784721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:52.212 [2024-07-16 00:22:10.784727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:52.212 [2024-07-16 00:22:10.784734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:52.212 [2024-07-16 00:22:10.784746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:52.212 [2024-07-16 00:22:10.784752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:52.212 [2024-07-16 00:22:10.784758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:52.212 [2024-07-16 00:22:10.784768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:52.212 [2024-07-16 00:22:10.784774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:52.212 [2024-07-16 00:22:10.784780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:52.212 [2024-07-16 00:22:10.784789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:52.212 [2024-07-16 00:22:10.784794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:52.212 [2024-07-16 00:22:10.784800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:52.212 [2024-07-16 00:22:10.784866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:52.212 [2024-07-16 00:22:10.784874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:52.212 [2024-07-16 00:22:10.784879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:52.212 [2024-07-16 00:22:10.784884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:52.212 [2024-07-16 00:22:10.784891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21951d0 (9): Bad file descriptor 00:20:52.212 [2024-07-16 00:22:10.784899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219cb30 (9): Bad file descriptor 00:20:52.212 [2024-07-16 00:22:10.784907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca7340 (9): Bad file descriptor 00:20:52.212 [2024-07-16 00:22:10.784915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b190 (9): Bad file descriptor 00:20:52.213 [2024-07-16 00:22:10.784922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:52.213 [2024-07-16 00:22:10.784930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:52.213 [2024-07-16 00:22:10.784936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
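At this point cnode1, cnode2, cnode3, cnode8, and cnode9 have each run through the same three-line failure sequence (Ctrlr is in error state, controller reinitialization failed, in failed state). When triaging a saved console log like this one, a shell one-liner is enough to tally which subsystems ended up failed (illustrative only; build.log is a hypothetical capture of this output):

  grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*] in failed state' build.log | sort -u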
00:20:52.213 [2024-07-16 00:22:10.784944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:52.213 [2024-07-16 00:22:10.784950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:52.213 [2024-07-16 00:22:10.784956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:52.213 [2024-07-16 00:22:10.784981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:52.213 [2024-07-16 00:22:10.784987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:52.213 [2024-07-16 00:22:10.784992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:52.213 [2024-07-16 00:22:10.784998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:52.213 [2024-07-16 00:22:10.785004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:52.213 [2024-07-16 00:22:10.785012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:52.213 [2024-07-16 00:22:10.785018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:52.213 [2024-07-16 00:22:10.785023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:52.213 [2024-07-16 00:22:10.785031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:52.213 [2024-07-16 00:22:10.785036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:52.213 [2024-07-16 00:22:10.785042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:52.213 [2024-07-16 00:22:10.785051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:52.213 [2024-07-16 00:22:10.785057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:52.213 [2024-07-16 00:22:10.785063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:52.213 [2024-07-16 00:22:10.785088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:52.213 [2024-07-16 00:22:10.785095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:52.213 [2024-07-16 00:22:10.785100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:52.213 [2024-07-16 00:22:10.785105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
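Every reconnect attempt above fails the same way: connect() in posix_sock_create returns errno 111 (ECONNREFUSED) because the shutdown test has already killed the target, so nothing listens on 10.0.0.2:4420 any more; nvme_tcp then reports the socket error and bdev_nvme gives up with "Resetting controller failed." The condition is easy to reproduce by hand with bash's /dev/tcp redirection (a minimal illustrative probe, assuming the same address and port as in the trace):

  # fails exactly when connect() would return ECONNREFUSED
  if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo 'no listener on 10.0.0.2:4420 - connect refused'
  fi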
00:20:52.472 00:22:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:52.472 00:22:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1571355 00:20:53.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1571355) - No such process 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:53.410 rmmod nvme_tcp 00:20:53.410 rmmod nvme_fabrics 00:20:53.410 rmmod nvme_keyring 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.410 00:22:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.970 00:22:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:55.970 00:20:55.970 real 0m8.027s 00:20:55.970 user 0m20.400s 00:20:55.970 sys 0m1.262s 00:20:55.970 
00:22:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:20:55.970 00:22:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.970 ************************************ 00:20:55.970 END TEST nvmf_shutdown_tc3 00:20:55.970 ************************************ 00:20:55.970 00:22:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1136 -- # return 0 00:20:55.970 00:22:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:55.970 00:20:55.970 real 0m31.279s 00:20:55.970 user 1m18.955s 00:20:55.970 sys 0m8.254s 00:20:55.970 00:22:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1118 -- # xtrace_disable 00:20:55.970 00:22:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:55.970 ************************************ 00:20:55.970 END TEST nvmf_shutdown 00:20:55.970 ************************************ 00:20:55.970 00:22:14 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:20:55.970 00:22:14 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:55.970 00:22:14 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:55.970 00:22:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:55.970 00:22:14 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:55.970 00:22:14 nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:55.970 00:22:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:55.970 00:22:14 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:55.970 00:22:14 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:55.970 00:22:14 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:20:55.970 00:22:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:20:55.970 00:22:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:55.970 ************************************ 00:20:55.970 START TEST nvmf_multicontroller 00:20:55.970 ************************************ 00:20:55.970 00:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:55.970 * Looking for test storage... 
00:20:55.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:55.970 00:22:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:55.970 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:55.970 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.970 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.970 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.970 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.970 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.970 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.970 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.970 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.970 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.970 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.970 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:55.971 00:22:14 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:55.971 00:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.269 00:22:19 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:01.269 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:01.269 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.269 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:01.270 Found net devices under 0000:86:00.0: cvl_0_0 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:01.270 Found net devices under 0000:86:00.1: cvl_0_1 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:01.270 00:22:19 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:01.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:01.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:21:01.270 00:21:01.270 --- 10.0.0.2 ping statistics --- 00:21:01.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.270 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:01.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:01.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:21:01.270 00:21:01.270 --- 10.0.0.1 ping statistics --- 00:21:01.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.270 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1575398 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1575398 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@823 -- # '[' -z 1575398 ']' 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@828 -- # local max_retries=100 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:01.270 00:22:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:01.270 [2024-07-16 00:22:19.862766] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:21:01.270 [2024-07-16 00:22:19.862812] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.270 [2024-07-16 00:22:19.919750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:01.270 [2024-07-16 00:22:19.999431] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.270 [2024-07-16 00:22:19.999468] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.270 [2024-07-16 00:22:19.999475] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.270 [2024-07-16 00:22:19.999481] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.270 [2024-07-16 00:22:19.999487] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:01.270 [2024-07-16 00:22:19.999582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.270 [2024-07-16 00:22:19.999610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:01.270 [2024-07-16 00:22:19.999612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # return 0 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:02.206 [2024-07-16 00:22:20.732276] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:21:02.206 Malloc0 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:02.206 [2024-07-16 00:22:20.792894] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:02.206 [2024-07-16 00:22:20.800836] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:02.206 Malloc1 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:02.206 00:22:20 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1575647 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1575647 /var/tmp/bdevperf.sock 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@823 -- # '[' -z 1575647 ']' 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
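Everything from this point in the multicontroller test is driven through that socket: bdevperf was launched with -z, so it idles until JSON-RPC commands arrive on /var/tmp/bdevperf.sock. A minimal sketch of the same sequence issued by hand, assuming SPDK's stock scripts/rpc.py client (which the trace's rpc_cmd helper wraps) and the addresses used in this run; the sketch is illustrative and not part of the captured trace:

    # Attach the target's cnode1 subsystem through bdevperf's RPC socket as bdev "NVMe0";
    # -i/-c pin the host-side source address and service id, mirroring rpc_cmd in the trace.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

    # Re-attaching under the same controller name with a conflicting identity (a different
    # host NQN via -q, a different subsystem, or a multipath mode via -x) must be rejected
    # with JSON-RPC error -114; the NOT wrappers in the trace below assert exactly that.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 \
        -q nqn.2021-09-7.io.spdk:00001 && echo "unexpected success" || echo "rejected as expected"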
00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:02.206 00:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # return 0 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.143 NVMe0n1 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:03.143 1 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@642 -- # local es=0 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.143 request: 00:21:03.143 { 00:21:03.143 "name": "NVMe0", 00:21:03.143 "trtype": "tcp", 00:21:03.143 "traddr": "10.0.0.2", 00:21:03.143 "adrfam": "ipv4", 00:21:03.143 "trsvcid": "4420", 00:21:03.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.143 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:03.143 "hostaddr": "10.0.0.2", 00:21:03.143 "hostsvcid": "60000", 00:21:03.143 "prchk_reftag": false, 
00:21:03.143 "prchk_guard": false, 00:21:03.143 "hdgst": false, 00:21:03.143 "ddgst": false, 00:21:03.143 "method": "bdev_nvme_attach_controller", 00:21:03.143 "req_id": 1 00:21:03.143 } 00:21:03.143 Got JSON-RPC error response 00:21:03.143 response: 00:21:03.143 { 00:21:03.143 "code": -114, 00:21:03.143 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:03.143 } 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # es=1 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@642 -- # local es=0 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.143 request: 00:21:03.143 { 00:21:03.143 "name": "NVMe0", 00:21:03.143 "trtype": "tcp", 00:21:03.143 "traddr": "10.0.0.2", 00:21:03.143 "adrfam": "ipv4", 00:21:03.143 "trsvcid": "4420", 00:21:03.143 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:03.143 "hostaddr": "10.0.0.2", 00:21:03.143 "hostsvcid": "60000", 00:21:03.143 "prchk_reftag": false, 00:21:03.143 "prchk_guard": false, 00:21:03.143 "hdgst": false, 00:21:03.143 "ddgst": false, 00:21:03.143 "method": "bdev_nvme_attach_controller", 00:21:03.143 "req_id": 1 00:21:03.143 } 00:21:03.143 Got JSON-RPC error response 00:21:03.143 response: 00:21:03.143 { 00:21:03.143 "code": -114, 00:21:03.143 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:03.143 } 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # es=1 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@664 -- # [[ -n '' ]] 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@642 -- # local es=0 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.143 request: 00:21:03.143 { 00:21:03.143 "name": "NVMe0", 00:21:03.143 "trtype": "tcp", 00:21:03.143 "traddr": "10.0.0.2", 00:21:03.143 "adrfam": "ipv4", 00:21:03.143 "trsvcid": "4420", 00:21:03.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.143 "hostaddr": "10.0.0.2", 00:21:03.143 "hostsvcid": "60000", 00:21:03.143 "prchk_reftag": false, 00:21:03.143 "prchk_guard": false, 00:21:03.143 "hdgst": false, 00:21:03.143 "ddgst": false, 00:21:03.143 "multipath": "disable", 00:21:03.143 "method": "bdev_nvme_attach_controller", 00:21:03.143 "req_id": 1 00:21:03.143 } 00:21:03.143 Got JSON-RPC error response 00:21:03.143 response: 00:21:03.143 { 00:21:03.143 "code": -114, 00:21:03.143 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:03.143 } 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # es=1 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@642 -- # local es=0 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.143 request: 00:21:03.143 { 00:21:03.143 "name": "NVMe0", 00:21:03.143 "trtype": "tcp", 00:21:03.143 "traddr": "10.0.0.2", 00:21:03.143 "adrfam": "ipv4", 00:21:03.143 "trsvcid": "4420", 00:21:03.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.143 "hostaddr": "10.0.0.2", 00:21:03.143 "hostsvcid": "60000", 00:21:03.143 "prchk_reftag": false, 00:21:03.143 "prchk_guard": false, 00:21:03.143 "hdgst": false, 00:21:03.143 "ddgst": false, 00:21:03.143 "multipath": "failover", 00:21:03.143 "method": "bdev_nvme_attach_controller", 00:21:03.143 "req_id": 1 00:21:03.143 } 00:21:03.143 Got JSON-RPC error response 00:21:03.143 response: 00:21:03.143 { 00:21:03.143 "code": -114, 00:21:03.143 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:03.143 } 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # es=1 00:21:03.143 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:21:03.144 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:21:03.144 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:21:03.144 00:22:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:03.144 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:03.144 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.144 00:21:03.144 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:03.144 00:22:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:03.144 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:03.144 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.144 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:03.144 00:22:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:03.144 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:03.144 00:22:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.403 00:21:03.403 00:22:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:03.403 00:22:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:03.403 00:22:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:03.403 00:22:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:03.403 00:22:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:03.403 00:22:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:03.403 00:22:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:03.403 00:22:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:04.339 0 00:21:04.339 00:22:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:04.339 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:04.339 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.597 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:04.597 00:22:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1575647 00:21:04.597 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@942 -- # '[' -z 1575647 ']' 00:21:04.597 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # kill -0 1575647 00:21:04.597 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # uname 00:21:04.598 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:04.598 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1575647 00:21:04.598 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:21:04.598 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:21:04.598 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1575647' 00:21:04.598 killing process with pid 1575647 00:21:04.598 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@961 -- # kill 1575647 00:21:04.598 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # wait 1575647 00:21:04.598 00:22:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:04.598 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:04.598 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.598 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:04.598 00:22:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode2 00:21:04.598 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:04.598 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.598 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:04.598 00:22:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:04.598 00:22:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:04.598 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1606 -- # read -r file 00:21:04.873 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:04.873 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # sort -u 00:21:04.873 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # cat 00:21:04.873 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:04.873 [2024-07-16 00:22:20.901680] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:21:04.873 [2024-07-16 00:22:20.901732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1575647 ] 00:21:04.873 [2024-07-16 00:22:20.956208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.873 [2024-07-16 00:22:21.036935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.873 [2024-07-16 00:22:22.047486] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name a01703d4-04f5-416e-9591-352263252a69 already exists 00:21:04.873 [2024-07-16 00:22:22.047516] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:a01703d4-04f5-416e-9591-352263252a69 alias for bdev NVMe1n1 00:21:04.873 [2024-07-16 00:22:22.047524] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:04.873 Running I/O for 1 seconds... 
00:21:04.873
00:21:04.873 Latency(us)
00:21:04.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:04.873 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:21:04.873 NVMe0n1 : 1.01 24289.48 94.88 0.00 0.00 5257.71 3262.55 9858.89
00:21:04.873 ===================================================================================================================
00:21:04.873 Total : 24289.48 94.88 0.00 0.00 5257.71 3262.55 9858.89
00:21:04.873 Received shutdown signal, test time was about 1.000000 seconds
00:21:04.873
00:21:04.873 Latency(us)
00:21:04.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:04.873 ===================================================================================================================
00:21:04.873 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:04.873 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1606 -- # read -r file 00:22:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:22:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:22:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp rmmod nvme_tcp rmmod nvme_fabrics rmmod nvme_keyring 00:22:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:22:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:22:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1575398 ']' 00:22:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1575398 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@942 -- # '[' -z 1575398 ']' 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # kill -0 1575398 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # uname 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1575398 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1575398' killing process with pid 1575398 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@961 -- # kill 1575398 00:22:23
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # wait 1575398 00:21:05.132 00:22:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:05.132 00:22:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:05.132 00:22:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:05.132 00:22:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:05.132 00:22:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:05.132 00:22:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.132 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.132 00:22:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.037 00:22:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:07.038 00:21:07.038 real 0m11.446s 00:21:07.038 user 0m15.784s 00:21:07.038 sys 0m4.661s 00:21:07.038 00:22:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1118 -- # xtrace_disable 00:21:07.038 00:22:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:07.038 ************************************ 00:21:07.038 END TEST nvmf_multicontroller 00:21:07.038 ************************************ 00:21:07.297 00:22:25 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:21:07.297 00:22:25 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:07.297 00:22:25 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:21:07.297 00:22:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:21:07.297 00:22:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:07.297 ************************************ 00:21:07.297 START TEST nvmf_aer 00:21:07.297 ************************************ 00:21:07.297 00:22:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:07.297 * Looking for test storage... 
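The nvmf_aer test starting here exercises the Asynchronous Event Request path: the aer helper attaches to the target, registers AER callbacks, and the harness then hot-adds a second namespace (Malloc1, nsid 2) so the controller raises a namespace-attribute-changed notice. The helper signals completion by touching a file that the harness polls for; a minimal sketch of that polling idiom, following the waitforfile trace visible further down (200 iterations of 0.1 s gives a roughly 20 s timeout; names mirror the trace, the sketch itself is illustrative):

    # Wait for the file the aer helper touches once its AER callback has fired.
    waitforfile() {
        local file=$1 i=0
        while [ ! -e "$file" ] && [ "$i" -lt 200 ]; do
            i=$((i + 1))
            sleep 0.1
        done
        [ -e "$file" ]   # non-zero exit status if the event never arrived
    }
    waitforfile /tmp/aer_touch_file || echo "AER notification timed out"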
00:21:07.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:07.297 00:22:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.570 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.570 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:12.570 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:12.570 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:21:12.570 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:12.570 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:12.570 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:12.570 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:12.570 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:12.570 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:12.570 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:12.570 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:12.570 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:12.570 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:12.570 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:12.570 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:12.571 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:21:12.571 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:12.571 Found net devices under 0000:86:00.0: cvl_0_0 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:12.571 Found net devices under 0000:86:00.1: cvl_0_1 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.571 
00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.571 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.830 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.830 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.830 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:12.830 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.830 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.830 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.830 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:12.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:21:12.830 00:21:12.830 --- 10.0.0.2 ping statistics --- 00:21:12.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.830 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:21:12.830 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:12.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:21:12.830 00:21:12.830 --- 10.0.0.1 ping statistics --- 00:21:12.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.830 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:21:12.830 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.830 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:12.830 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:12.830 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.830 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:12.830 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:12.830 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.830 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:12.830 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:12.830 00:22:31 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:12.830 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:12.830 00:22:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:12.831 00:22:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.831 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1579627 00:21:12.831 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1579627 00:21:12.831 00:22:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:12.831 00:22:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@823 -- # '[' -z 1579627 ']' 00:21:12.831 00:22:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.831 00:22:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:12.831 00:22:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.831 00:22:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:12.831 00:22:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.831 [2024-07-16 00:22:31.675992] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:21:12.831 [2024-07-16 00:22:31.676032] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.089 [2024-07-16 00:22:31.734775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:13.089 [2024-07-16 00:22:31.807674] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.089 [2024-07-16 00:22:31.807714] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:13.089 [2024-07-16 00:22:31.807720] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.089 [2024-07-16 00:22:31.807726] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.089 [2024-07-16 00:22:31.807731] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.089 [2024-07-16 00:22:31.807796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.089 [2024-07-16 00:22:31.807811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.089 [2024-07-16 00:22:31.807834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:13.089 [2024-07-16 00:22:31.807835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.655 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:13.655 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # return 0 00:21:13.655 00:22:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:13.655 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:13.655 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:13.913 [2024-07-16 00:22:32.518220] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:13.913 Malloc0 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:13.913 [2024-07-16 00:22:32.569810] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:13.913 [ 00:21:13.913 { 00:21:13.913 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:13.913 "subtype": "Discovery", 00:21:13.913 "listen_addresses": [], 00:21:13.913 "allow_any_host": true, 00:21:13.913 "hosts": [] 00:21:13.913 }, 00:21:13.913 { 00:21:13.913 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.913 "subtype": "NVMe", 00:21:13.913 "listen_addresses": [ 00:21:13.913 { 00:21:13.913 "trtype": "TCP", 00:21:13.913 "adrfam": "IPv4", 00:21:13.913 "traddr": "10.0.0.2", 00:21:13.913 "trsvcid": "4420" 00:21:13.913 } 00:21:13.913 ], 00:21:13.913 "allow_any_host": true, 00:21:13.913 "hosts": [], 00:21:13.913 "serial_number": "SPDK00000000000001", 00:21:13.913 "model_number": "SPDK bdev Controller", 00:21:13.913 "max_namespaces": 2, 00:21:13.913 "min_cntlid": 1, 00:21:13.913 "max_cntlid": 65519, 00:21:13.913 "namespaces": [ 00:21:13.913 { 00:21:13.913 "nsid": 1, 00:21:13.913 "bdev_name": "Malloc0", 00:21:13.913 "name": "Malloc0", 00:21:13.913 "nguid": "CF3612BD9B2741499F0967BA7FD65C42", 00:21:13.913 "uuid": "cf3612bd-9b27-4149-9f09-67ba7fd65c42" 00:21:13.913 } 00:21:13.913 ] 00:21:13.913 } 00:21:13.913 ] 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1579682 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1259 -- # local i=0 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # '[' 0 -lt 200 ']' 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # i=1 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # sleep 0.1 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # '[' 1 -lt 200 ']' 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # i=2 00:21:13.913 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # sleep 0.1 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # '[' 2 -lt 200 ']' 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # i=3 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # sleep 0.1 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1260 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1270 -- # return 0 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:14.171 Malloc1 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:14.171 Asynchronous Event Request test 00:21:14.171 Attaching to 10.0.0.2 00:21:14.171 Attached to 10.0.0.2 00:21:14.171 Registering asynchronous event callbacks... 00:21:14.171 Starting namespace attribute notice tests for all controllers... 00:21:14.171 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:14.171 aer_cb - Changed Namespace 00:21:14.171 Cleaning up... 00:21:14.171 [ 00:21:14.171 { 00:21:14.171 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:14.171 "subtype": "Discovery", 00:21:14.171 "listen_addresses": [], 00:21:14.171 "allow_any_host": true, 00:21:14.171 "hosts": [] 00:21:14.171 }, 00:21:14.171 { 00:21:14.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.171 "subtype": "NVMe", 00:21:14.171 "listen_addresses": [ 00:21:14.171 { 00:21:14.171 "trtype": "TCP", 00:21:14.171 "adrfam": "IPv4", 00:21:14.171 "traddr": "10.0.0.2", 00:21:14.171 "trsvcid": "4420" 00:21:14.171 } 00:21:14.171 ], 00:21:14.171 "allow_any_host": true, 00:21:14.171 "hosts": [], 00:21:14.171 "serial_number": "SPDK00000000000001", 00:21:14.171 "model_number": "SPDK bdev Controller", 00:21:14.171 "max_namespaces": 2, 00:21:14.171 "min_cntlid": 1, 00:21:14.171 "max_cntlid": 65519, 00:21:14.171 "namespaces": [ 00:21:14.171 { 00:21:14.171 "nsid": 1, 00:21:14.171 "bdev_name": "Malloc0", 00:21:14.171 "name": "Malloc0", 00:21:14.171 "nguid": "CF3612BD9B2741499F0967BA7FD65C42", 00:21:14.171 "uuid": "cf3612bd-9b27-4149-9f09-67ba7fd65c42" 00:21:14.171 }, 00:21:14.171 { 00:21:14.171 "nsid": 2, 00:21:14.171 "bdev_name": "Malloc1", 00:21:14.171 "name": "Malloc1", 00:21:14.171 "nguid": "8E4EC308FAE4483D8DB6F8BF8B489AF1", 00:21:14.171 "uuid": "8e4ec308-fae4-483d-8db6-f8bf8b489af1" 00:21:14.171 } 00:21:14.171 ] 00:21:14.171 } 00:21:14.171 ] 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1579682 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:14.171 00:22:32 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:14.171 00:22:32 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:14.171 00:22:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:14.171 00:22:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:14.428 rmmod nvme_tcp 00:21:14.428 rmmod nvme_fabrics 00:21:14.428 rmmod nvme_keyring 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1579627 ']' 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1579627 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@942 -- # '[' -z 1579627 ']' 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # kill -0 1579627 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@947 -- # uname 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1579627 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1579627' 00:21:14.428 killing process with pid 1579627 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@961 -- # kill 1579627 00:21:14.428 00:22:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # wait 1579627 00:21:14.686 00:22:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:14.686 00:22:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:14.686 00:22:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:14.686 00:22:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:14.686 00:22:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:14.686 00:22:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.686 00:22:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.686 00:22:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.639 00:22:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:16.639 00:21:16.639 real 0m9.437s 00:21:16.639 user 0m7.543s 00:21:16.639 sys 0m4.644s 00:21:16.639 00:22:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1118 -- # xtrace_disable 00:21:16.639 00:22:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:16.639 ************************************ 00:21:16.639 END TEST nvmf_aer 00:21:16.639 ************************************ 00:21:16.639 00:22:35 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:21:16.639 00:22:35 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:16.639 00:22:35 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:21:16.639 00:22:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:21:16.639 00:22:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:16.639 ************************************ 00:21:16.639 START TEST nvmf_async_init 00:21:16.639 ************************************ 00:21:16.639 00:22:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:16.898 * Looking for test storage... 00:21:16.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.898 00:22:35 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=7664cd63223a4312954e5d401a363659 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:16.898 00:22:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:22.168 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:22.168 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:21:22.168 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:22.168 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 
00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:22.169 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:22.169 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
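[annotation] The chunk above shows nvmf/common.sh classifying both ports of an Intel E810 NIC (device ID 0x159b) as usable TCP test devices; the next chunk resolves each PCI function to its kernel net device through sysfs, exactly as pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) does. A minimal standalone sketch of that lookup, using the PCI addresses from this run (the /sys/bus/pci/devices/<addr>/net/ layout is standard Linux; only the loop is simplified relative to common.sh):

  for pci in 0000:86:00.0 0000:86:00.1; do
    # a network-class PCI function lists its netdev name(s) under .../net/
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$dev" ] && echo "$pci -> ${dev##*/}"   # e.g. 0000:86:00.0 -> cvl_0_0
    done
  done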
00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:22.169 Found net devices under 0000:86:00.0: cvl_0_0 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:22.169 Found net devices under 0000:86:00.1: cvl_0_1 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:22.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:21:22.169 00:21:22.169 --- 10.0.0.2 ping statistics --- 00:21:22.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.169 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:22.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:22.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:21:22.169 00:21:22.169 --- 10.0.0.1 ping statistics --- 00:21:22.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.169 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.169 00:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:21:22.169 00:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:22.169 00:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.169 00:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:22.169 00:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:22.169 00:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.169 00:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:22.169 00:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:22.428 00:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:22.428 00:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:22.428 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:22.428 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:22.428 00:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1583238 00:21:22.428 00:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:22.428 00:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1583238 00:21:22.428 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@823 -- # '[' -z 1583238 ']' 00:21:22.428 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.428 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:22.428 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.428 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:22.428 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:22.428 [2024-07-16 00:22:41.082673] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:21:22.428 [2024-07-16 00:22:41.082716] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.428 [2024-07-16 00:22:41.139696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.428 [2024-07-16 00:22:41.212361] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.428 [2024-07-16 00:22:41.212404] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
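[annotation] Before the target app starts, nvmf_tcp_init (above) wires the two E810 ports into a loopback topology on a single host: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and a ping in each direction confirms the path before nvmf_tgt is launched inside the namespace. Condensed from the commands in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # isolate the target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns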
00:21:22.428 [2024-07-16 00:22:41.212411] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.428 [2024-07-16 00:22:41.212417] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.428 [2024-07-16 00:22:41.212423] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:22.428 [2024-07-16 00:22:41.212443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # return 0 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:23.363 [2024-07-16 00:22:41.923720] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:23.363 null0 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7664cd63223a4312954e5d401a363659 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:23.363 
00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:23.363 [2024-07-16 00:22:41.963944] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:23.363 00:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:23.363 nvme0n1 00:21:23.363 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:23.363 00:22:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:23.363 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:23.363 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:23.363 [ 00:21:23.363 { 00:21:23.363 "name": "nvme0n1", 00:21:23.363 "aliases": [ 00:21:23.363 "7664cd63-223a-4312-954e-5d401a363659" 00:21:23.363 ], 00:21:23.364 "product_name": "NVMe disk", 00:21:23.364 "block_size": 512, 00:21:23.364 "num_blocks": 2097152, 00:21:23.364 "uuid": "7664cd63-223a-4312-954e-5d401a363659", 00:21:23.364 "assigned_rate_limits": { 00:21:23.364 "rw_ios_per_sec": 0, 00:21:23.364 "rw_mbytes_per_sec": 0, 00:21:23.364 "r_mbytes_per_sec": 0, 00:21:23.364 "w_mbytes_per_sec": 0 00:21:23.364 }, 00:21:23.364 "claimed": false, 00:21:23.364 "zoned": false, 00:21:23.364 "supported_io_types": { 00:21:23.364 "read": true, 00:21:23.364 "write": true, 00:21:23.364 "unmap": false, 00:21:23.364 "flush": true, 00:21:23.364 "reset": true, 00:21:23.364 "nvme_admin": true, 00:21:23.364 "nvme_io": true, 00:21:23.364 "nvme_io_md": false, 00:21:23.364 "write_zeroes": true, 00:21:23.364 "zcopy": false, 00:21:23.364 "get_zone_info": false, 00:21:23.364 "zone_management": false, 00:21:23.364 "zone_append": false, 00:21:23.364 "compare": true, 00:21:23.364 "compare_and_write": true, 00:21:23.364 "abort": true, 00:21:23.364 "seek_hole": false, 00:21:23.364 "seek_data": false, 00:21:23.364 "copy": true, 00:21:23.364 "nvme_iov_md": false 00:21:23.364 }, 00:21:23.364 "memory_domains": [ 00:21:23.364 { 00:21:23.364 "dma_device_id": "system", 00:21:23.364 "dma_device_type": 1 00:21:23.364 } 00:21:23.364 ], 00:21:23.364 "driver_specific": { 00:21:23.364 "nvme": [ 00:21:23.364 { 00:21:23.364 "trid": { 00:21:23.364 "trtype": "TCP", 00:21:23.364 "adrfam": "IPv4", 00:21:23.364 "traddr": "10.0.0.2", 00:21:23.364 "trsvcid": "4420", 00:21:23.364 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:23.364 }, 00:21:23.364 "ctrlr_data": { 00:21:23.364 "cntlid": 1, 00:21:23.364 "vendor_id": "0x8086", 00:21:23.364 "model_number": "SPDK bdev Controller", 00:21:23.364 "serial_number": "00000000000000000000", 00:21:23.364 "firmware_revision": "24.09", 00:21:23.364 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:23.364 "oacs": { 00:21:23.364 "security": 0, 00:21:23.364 "format": 0, 00:21:23.364 "firmware": 0, 00:21:23.364 "ns_manage": 0 00:21:23.364 }, 00:21:23.364 "multi_ctrlr": true, 00:21:23.364 "ana_reporting": false 00:21:23.364 }, 00:21:23.364 "vs": { 00:21:23.364 "nvme_version": "1.3" 00:21:23.364 }, 00:21:23.364 "ns_data": 
{ 00:21:23.364 "id": 1, 00:21:23.364 "can_share": true 00:21:23.364 } 00:21:23.364 } 00:21:23.364 ], 00:21:23.364 "mp_policy": "active_passive" 00:21:23.364 } 00:21:23.364 } 00:21:23.364 ] 00:21:23.364 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:23.364 00:22:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:23.364 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:23.364 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:23.364 [2024-07-16 00:22:42.212445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:23.364 [2024-07-16 00:22:42.212518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x78f250 (9): Bad file descriptor 00:21:23.623 [2024-07-16 00:22:42.344310] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:23.623 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:23.623 00:22:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:23.623 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:23.623 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:23.623 [ 00:21:23.623 { 00:21:23.623 "name": "nvme0n1", 00:21:23.623 "aliases": [ 00:21:23.623 "7664cd63-223a-4312-954e-5d401a363659" 00:21:23.623 ], 00:21:23.623 "product_name": "NVMe disk", 00:21:23.623 "block_size": 512, 00:21:23.623 "num_blocks": 2097152, 00:21:23.623 "uuid": "7664cd63-223a-4312-954e-5d401a363659", 00:21:23.623 "assigned_rate_limits": { 00:21:23.623 "rw_ios_per_sec": 0, 00:21:23.623 "rw_mbytes_per_sec": 0, 00:21:23.623 "r_mbytes_per_sec": 0, 00:21:23.623 "w_mbytes_per_sec": 0 00:21:23.623 }, 00:21:23.623 "claimed": false, 00:21:23.623 "zoned": false, 00:21:23.623 "supported_io_types": { 00:21:23.623 "read": true, 00:21:23.623 "write": true, 00:21:23.623 "unmap": false, 00:21:23.623 "flush": true, 00:21:23.623 "reset": true, 00:21:23.623 "nvme_admin": true, 00:21:23.623 "nvme_io": true, 00:21:23.623 "nvme_io_md": false, 00:21:23.623 "write_zeroes": true, 00:21:23.623 "zcopy": false, 00:21:23.623 "get_zone_info": false, 00:21:23.623 "zone_management": false, 00:21:23.623 "zone_append": false, 00:21:23.623 "compare": true, 00:21:23.623 "compare_and_write": true, 00:21:23.623 "abort": true, 00:21:23.623 "seek_hole": false, 00:21:23.623 "seek_data": false, 00:21:23.623 "copy": true, 00:21:23.623 "nvme_iov_md": false 00:21:23.623 }, 00:21:23.623 "memory_domains": [ 00:21:23.623 { 00:21:23.623 "dma_device_id": "system", 00:21:23.623 "dma_device_type": 1 00:21:23.623 } 00:21:23.623 ], 00:21:23.623 "driver_specific": { 00:21:23.623 "nvme": [ 00:21:23.623 { 00:21:23.623 "trid": { 00:21:23.623 "trtype": "TCP", 00:21:23.623 "adrfam": "IPv4", 00:21:23.623 "traddr": "10.0.0.2", 00:21:23.623 "trsvcid": "4420", 00:21:23.623 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:23.623 }, 00:21:23.623 "ctrlr_data": { 00:21:23.623 "cntlid": 2, 00:21:23.623 "vendor_id": "0x8086", 00:21:23.623 "model_number": "SPDK bdev Controller", 00:21:23.623 "serial_number": "00000000000000000000", 00:21:23.623 "firmware_revision": "24.09", 00:21:23.623 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:23.623 "oacs": { 00:21:23.623 "security": 0, 00:21:23.623 "format": 0, 00:21:23.623 "firmware": 0, 
00:21:23.623 "ns_manage": 0 00:21:23.623 }, 00:21:23.623 "multi_ctrlr": true, 00:21:23.623 "ana_reporting": false 00:21:23.623 }, 00:21:23.623 "vs": { 00:21:23.623 "nvme_version": "1.3" 00:21:23.623 }, 00:21:23.623 "ns_data": { 00:21:23.623 "id": 1, 00:21:23.624 "can_share": true 00:21:23.624 } 00:21:23.624 } 00:21:23.624 ], 00:21:23.624 "mp_policy": "active_passive" 00:21:23.624 } 00:21:23.624 } 00:21:23.624 ] 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.bm9iK0Ngmp 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.bm9iK0Ngmp 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:23.624 [2024-07-16 00:22:42.389009] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:23.624 [2024-07-16 00:22:42.389129] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bm9iK0Ngmp 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:23.624 [2024-07-16 00:22:42.397022] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bm9iK0Ngmp 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:23.624 00:22:42 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:23.624 [2024-07-16 00:22:42.405060] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.624 [2024-07-16 00:22:42.405100] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:23.624 nvme0n1 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:23.624 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:23.884 [ 00:21:23.884 { 00:21:23.884 "name": "nvme0n1", 00:21:23.884 "aliases": [ 00:21:23.884 "7664cd63-223a-4312-954e-5d401a363659" 00:21:23.884 ], 00:21:23.884 "product_name": "NVMe disk", 00:21:23.884 "block_size": 512, 00:21:23.884 "num_blocks": 2097152, 00:21:23.884 "uuid": "7664cd63-223a-4312-954e-5d401a363659", 00:21:23.884 "assigned_rate_limits": { 00:21:23.884 "rw_ios_per_sec": 0, 00:21:23.884 "rw_mbytes_per_sec": 0, 00:21:23.884 "r_mbytes_per_sec": 0, 00:21:23.884 "w_mbytes_per_sec": 0 00:21:23.884 }, 00:21:23.884 "claimed": false, 00:21:23.884 "zoned": false, 00:21:23.884 "supported_io_types": { 00:21:23.884 "read": true, 00:21:23.884 "write": true, 00:21:23.884 "unmap": false, 00:21:23.884 "flush": true, 00:21:23.884 "reset": true, 00:21:23.884 "nvme_admin": true, 00:21:23.884 "nvme_io": true, 00:21:23.884 "nvme_io_md": false, 00:21:23.884 "write_zeroes": true, 00:21:23.884 "zcopy": false, 00:21:23.884 "get_zone_info": false, 00:21:23.884 "zone_management": false, 00:21:23.884 "zone_append": false, 00:21:23.884 "compare": true, 00:21:23.884 "compare_and_write": true, 00:21:23.884 "abort": true, 00:21:23.884 "seek_hole": false, 00:21:23.884 "seek_data": false, 00:21:23.884 "copy": true, 00:21:23.884 "nvme_iov_md": false 00:21:23.884 }, 00:21:23.884 "memory_domains": [ 00:21:23.884 { 00:21:23.884 "dma_device_id": "system", 00:21:23.884 "dma_device_type": 1 00:21:23.884 } 00:21:23.884 ], 00:21:23.884 "driver_specific": { 00:21:23.884 "nvme": [ 00:21:23.884 { 00:21:23.884 "trid": { 00:21:23.884 "trtype": "TCP", 00:21:23.884 "adrfam": "IPv4", 00:21:23.884 "traddr": "10.0.0.2", 00:21:23.884 "trsvcid": "4421", 00:21:23.884 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:23.884 }, 00:21:23.884 "ctrlr_data": { 00:21:23.884 "cntlid": 3, 00:21:23.884 "vendor_id": "0x8086", 00:21:23.884 "model_number": "SPDK bdev Controller", 00:21:23.884 "serial_number": "00000000000000000000", 00:21:23.884 "firmware_revision": "24.09", 00:21:23.884 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:23.884 "oacs": { 00:21:23.884 "security": 0, 00:21:23.884 "format": 0, 00:21:23.884 "firmware": 0, 00:21:23.884 "ns_manage": 0 00:21:23.884 }, 00:21:23.884 "multi_ctrlr": true, 00:21:23.884 "ana_reporting": false 00:21:23.884 }, 00:21:23.884 "vs": { 00:21:23.884 "nvme_version": "1.3" 00:21:23.884 }, 00:21:23.884 "ns_data": { 00:21:23.884 "id": 1, 00:21:23.884 "can_share": true 00:21:23.884 } 00:21:23.884 } 00:21:23.884 ], 00:21:23.884 "mp_policy": "active_passive" 00:21:23.884 } 00:21:23.884 } 00:21:23.884 ] 00:21:23.884 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:23.884 00:22:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:23.884 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:23.884 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:23.884 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:23.884 00:22:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.bm9iK0Ngmp 00:21:23.884 00:22:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:23.884 00:22:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:21:23.884 00:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:23.884 00:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:21:23.884 00:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:23.885 00:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:21:23.885 00:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:23.885 00:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:23.885 rmmod nvme_tcp 00:21:23.885 rmmod nvme_fabrics 00:21:23.885 rmmod nvme_keyring 00:21:23.885 00:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:23.885 00:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:21:23.885 00:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:21:23.885 00:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1583238 ']' 00:21:23.885 00:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1583238 00:21:23.885 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@942 -- # '[' -z 1583238 ']' 00:21:23.885 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # kill -0 1583238 00:21:23.885 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@947 -- # uname 00:21:23.885 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:23.885 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1583238 00:21:23.885 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:21:23.885 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:21:23.885 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1583238' 00:21:23.885 killing process with pid 1583238 00:21:23.885 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@961 -- # kill 1583238 00:21:23.885 [2024-07-16 00:22:42.601714] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:23.885 [2024-07-16 00:22:42.601740] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:23.885 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # wait 1583238 00:21:24.143 00:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:24.143 00:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:24.143 00:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:24.143 00:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
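[annotation] The async_init TLS leg above exercises the experimental NVMe/TCP TLS path end to end on the second listener port. Stripped of xtrace noise, the sequence is as follows (all NQNs, addresses, and the key file are the ones from this run; rpc_cmd is the harness wrapper around scripts/rpc.py):

  key=/tmp/tmp.bm9iK0Ngmp
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key"
  chmod 0600 "$key"
  # require an explicit host allow-list, then open a TLS listener on 4421
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key"
  # loop back through the kernel initiator-side bdev with the same PSK
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"

Note that both PSK-by-path interfaces are already flagged in the deprecation warnings above as scheduled for removal in v24.09, so this flow should be expected to change.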
00:21:24.143 00:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:24.143 00:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.143 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:24.143 00:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.051 00:22:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:26.051 00:21:26.051 real 0m9.379s 00:21:26.051 user 0m3.387s 00:21:26.051 sys 0m4.479s 00:21:26.051 00:22:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1118 -- # xtrace_disable 00:21:26.051 00:22:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:26.051 ************************************ 00:21:26.051 END TEST nvmf_async_init 00:21:26.051 ************************************ 00:21:26.051 00:22:44 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:21:26.051 00:22:44 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:26.051 00:22:44 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:21:26.051 00:22:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:21:26.051 00:22:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:26.311 ************************************ 00:21:26.311 START TEST dma 00:21:26.311 ************************************ 00:21:26.311 00:22:44 nvmf_tcp.dma -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:26.311 * Looking for test storage... 00:21:26.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:26.311 00:22:44 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:26.311 00:22:44 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:21:26.311 00:22:44 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.311 00:22:44 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.311 00:22:44 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.311 00:22:44 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.311 00:22:44 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.311 00:22:44 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.311 00:22:44 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.311 00:22:44 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.311 00:22:44 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.311 00:22:44 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.311 00:22:44 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:26.311 00:22:45 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:26.311 00:22:45 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.311 00:22:45 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.311 00:22:45 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:26.311 00:22:45 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
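[annotation] Every suite in this log runs under the same run_test wrapper from autotest_common.sh: a START banner, the test script invoked with --transport=tcp, bash time output (the real/user/sys lines above), and an END banner, with the wrapped script's return code deciding pass/fail. A simplified reconstruction of that pattern (not the verbatim autotest_common.sh implementation; only the banner text matches this log):

  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # run the test script, timing it
    local rc=$?               # capture its exit status for pass/fail
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }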
00:21:26.311 00:22:45 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:26.311 00:22:45 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.311 00:22:45 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.311 00:22:45 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.311 00:22:45 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.311 00:22:45 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.311 00:22:45 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.311 00:22:45 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:21:26.311 00:22:45 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.311 00:22:45 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:21:26.311 00:22:45 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:26.311 00:22:45 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:26.311 00:22:45 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:26.311 00:22:45 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.311 00:22:45 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.311 00:22:45 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:26.311 00:22:45 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:26.311 00:22:45 
nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:26.311 00:22:45 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:26.311 00:22:45 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:21:26.311 00:21:26.311 real 0m0.104s 00:21:26.311 user 0m0.048s 00:21:26.312 sys 0m0.064s 00:21:26.312 00:22:45 nvmf_tcp.dma -- common/autotest_common.sh@1118 -- # xtrace_disable 00:21:26.312 00:22:45 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:21:26.312 ************************************ 00:21:26.312 END TEST dma 00:21:26.312 ************************************ 00:21:26.312 00:22:45 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:21:26.312 00:22:45 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:26.312 00:22:45 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:21:26.312 00:22:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:21:26.312 00:22:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:26.312 ************************************ 00:21:26.312 START TEST nvmf_identify 00:21:26.312 ************************************ 00:21:26.312 00:22:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:26.312 * Looking for test storage... 00:21:26.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:26.312 00:22:45 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:26.571 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:26.571 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.571 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.571 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.571 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.571 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.571 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
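The dma test above is over almost before it starts: host/dma.sh exercises an RDMA-only offload path, so on a TCP run its first real action is the transport guard at host/dma.sh@12-13, which exits 0 immediately (hence the 0m0.104s runtime). Roughly, assuming the compared value comes from the script's transport variable (the xtrace only shows the expanded literal tcp, so the variable name here is an assumption):

    # host/dma.sh guard (sketch): skip the whole test unless the transport is rdma
    if [ "$TEST_TRANSPORT" != rdma ]; then
        exit 0
    fi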
00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
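The kilometer-long PATH values above are not corruption: paths/export.sh is re-sourced at the top of every test, and each pass prepends the golangci, go, and protoc directories again without checking for duplicates, so the entries accumulate over the run. A sketch of the pattern as it appears in the trace, plus an idempotent variant (the dedup guard is my suggestion, not something the script does):

    # paths/export.sh (sketch): unconditional prepends, repeated once per sourcing
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH

    # duplicate-free alternative (assumption): prepend only when missing
    case ":$PATH:" in
        *":/opt/go/1.21.1/bin:"*) ;;
        *) PATH=/opt/go/1.21.1/bin:$PATH ;;
    esac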
00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:21:26.572 00:22:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.849 
00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:31.849 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:31.849 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:31.849 Found net devices under 0000:86:00.0: cvl_0_0 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:31.849 Found net devices under 0000:86:00.1: cvl_0_1 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:31.849 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:32.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:32.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:21:32.109 00:21:32.109 --- 10.0.0.2 ping statistics --- 00:21:32.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.109 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:32.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:32.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:21:32.109 00:21:32.109 --- 10.0.0.1 ping statistics --- 00:21:32.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.109 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1586995 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1586995 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@823 -- # '[' -z 1586995 ']' 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
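The sequence above assembles the split-namespace topology the TCP host tests reuse: the first E810 port (cvl_0_0) is moved into a private network namespace and addressed 10.0.0.2 to act as the target side, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits traffic to the 4420 listener, a ping in each direction proves reachability, and only then is nvmf_tgt launched inside the namespace. Condensed from the xtrace (commands as logged; ordering preserved, error handling omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF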
00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:32.109 00:22:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:32.109 [2024-07-16 00:22:50.800641] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:21:32.109 [2024-07-16 00:22:50.800682] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.109 [2024-07-16 00:22:50.858774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:32.109 [2024-07-16 00:22:50.940262] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.109 [2024-07-16 00:22:50.940300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.109 [2024-07-16 00:22:50.940308] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.109 [2024-07-16 00:22:50.940315] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.109 [2024-07-16 00:22:50.940320] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:32.109 [2024-07-16 00:22:50.940360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.109 [2024-07-16 00:22:50.940377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.109 [2024-07-16 00:22:50.940486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:32.109 [2024-07-16 00:22:50.940488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.048 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:33.048 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # return 0 00:21:33.048 00:22:51 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:33.048 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:33.048 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:33.048 [2024-07-16 00:22:51.607954] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.048 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:33.048 00:22:51 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:33.048 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:33.048 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:33.048 00:22:51 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:33.048 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:33.048 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:33.048 Malloc0 00:21:33.048 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:33.048 00:22:51 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:33.048 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:33.048 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:33.048 
00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:33.049 00:22:51 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:33.049 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:33.049 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:33.049 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:33.049 00:22:51 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:33.049 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:33.049 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:33.049 [2024-07-16 00:22:51.692018] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.049 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:33.049 00:22:51 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:33.049 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:33.049 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:33.049 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:33.049 00:22:51 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:33.049 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:33.049 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:33.049 [ 00:21:33.049 { 00:21:33.049 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:33.049 "subtype": "Discovery", 00:21:33.049 "listen_addresses": [ 00:21:33.049 { 00:21:33.049 "trtype": "TCP", 00:21:33.049 "adrfam": "IPv4", 00:21:33.049 "traddr": "10.0.0.2", 00:21:33.049 "trsvcid": "4420" 00:21:33.049 } 00:21:33.049 ], 00:21:33.049 "allow_any_host": true, 00:21:33.049 "hosts": [] 00:21:33.049 }, 00:21:33.049 { 00:21:33.049 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.049 "subtype": "NVMe", 00:21:33.049 "listen_addresses": [ 00:21:33.049 { 00:21:33.049 "trtype": "TCP", 00:21:33.049 "adrfam": "IPv4", 00:21:33.049 "traddr": "10.0.0.2", 00:21:33.049 "trsvcid": "4420" 00:21:33.049 } 00:21:33.049 ], 00:21:33.049 "allow_any_host": true, 00:21:33.049 "hosts": [], 00:21:33.049 "serial_number": "SPDK00000000000001", 00:21:33.049 "model_number": "SPDK bdev Controller", 00:21:33.049 "max_namespaces": 32, 00:21:33.049 "min_cntlid": 1, 00:21:33.049 "max_cntlid": 65519, 00:21:33.049 "namespaces": [ 00:21:33.049 { 00:21:33.049 "nsid": 1, 00:21:33.049 "bdev_name": "Malloc0", 00:21:33.049 "name": "Malloc0", 00:21:33.049 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:33.049 "eui64": "ABCDEF0123456789", 00:21:33.049 "uuid": "c323e47e-67a3-4744-9318-cdbf5a43f1ae" 00:21:33.049 } 00:21:33.049 ] 00:21:33.049 } 00:21:33.049 ] 00:21:33.049 00:22:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:33.049 00:22:51 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:33.049 [2024-07-16 00:22:51.743620] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:21:33.049 [2024-07-16 00:22:51.743652] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587240 ] 00:21:33.049 [2024-07-16 00:22:51.772777] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:33.049 [2024-07-16 00:22:51.772829] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:33.049 [2024-07-16 00:22:51.772834] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:33.049 [2024-07-16 00:22:51.772847] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:33.049 [2024-07-16 00:22:51.772853] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:33.049 [2024-07-16 00:22:51.773250] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:33.049 [2024-07-16 00:22:51.773278] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ba8ec0 0 00:21:33.049 [2024-07-16 00:22:51.787233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:33.049 [2024-07-16 00:22:51.787248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:33.049 [2024-07-16 00:22:51.787252] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:33.049 [2024-07-16 00:22:51.787255] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:33.049 [2024-07-16 00:22:51.787292] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.049 [2024-07-16 00:22:51.787298] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.049 [2024-07-16 00:22:51.787302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ba8ec0) 00:21:33.049 [2024-07-16 00:22:51.787316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:33.049 [2024-07-16 00:22:51.787332] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2be40, cid 0, qid 0 00:21:33.049 [2024-07-16 00:22:51.794233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.049 [2024-07-16 00:22:51.794242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.049 [2024-07-16 00:22:51.794245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.049 [2024-07-16 00:22:51.794249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2be40) on tqpair=0x1ba8ec0 00:21:33.049 [2024-07-16 00:22:51.794258] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:33.049 [2024-07-16 00:22:51.794263] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:33.049 [2024-07-16 00:22:51.794268] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:33.049 [2024-07-16 00:22:51.794280] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.049 [2024-07-16 00:22:51.794284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.049 [2024-07-16 00:22:51.794287] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ba8ec0) 00:21:33.049 [2024-07-16 00:22:51.794294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.049 [2024-07-16 00:22:51.794306] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2be40, cid 0, qid 0 00:21:33.049 [2024-07-16 00:22:51.794490] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.049 [2024-07-16 00:22:51.794497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.049 [2024-07-16 00:22:51.794500] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.049 [2024-07-16 00:22:51.794503] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2be40) on tqpair=0x1ba8ec0 00:21:33.049 [2024-07-16 00:22:51.794508] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:33.049 [2024-07-16 00:22:51.794515] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:33.049 [2024-07-16 00:22:51.794521] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.049 [2024-07-16 00:22:51.794525] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.049 [2024-07-16 00:22:51.794531] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ba8ec0) 00:21:33.049 [2024-07-16 00:22:51.794537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.049 [2024-07-16 00:22:51.794547] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2be40, cid 0, qid 0 00:21:33.049 [2024-07-16 00:22:51.794630] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.049 [2024-07-16 00:22:51.794635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.049 [2024-07-16 00:22:51.794638] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.049 [2024-07-16 00:22:51.794641] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2be40) on tqpair=0x1ba8ec0 00:21:33.049 [2024-07-16 00:22:51.794646] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:33.049 [2024-07-16 00:22:51.794653] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:33.049 [2024-07-16 00:22:51.794659] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.049 [2024-07-16 00:22:51.794662] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.049 [2024-07-16 00:22:51.794665] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ba8ec0) 00:21:33.049 [2024-07-16 00:22:51.794671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.049 [2024-07-16 00:22:51.794680] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2be40, cid 0, qid 0 00:21:33.049 
[2024-07-16 00:22:51.794763] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.049 [2024-07-16 00:22:51.794769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.049 [2024-07-16 00:22:51.794772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.049 [2024-07-16 00:22:51.794775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2be40) on tqpair=0x1ba8ec0 00:21:33.049 [2024-07-16 00:22:51.794779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:33.049 [2024-07-16 00:22:51.794787] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.049 [2024-07-16 00:22:51.794791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.049 [2024-07-16 00:22:51.794794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ba8ec0) 00:21:33.049 [2024-07-16 00:22:51.794800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.049 [2024-07-16 00:22:51.794809] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2be40, cid 0, qid 0 00:21:33.049 [2024-07-16 00:22:51.794888] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.049 [2024-07-16 00:22:51.794894] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.049 [2024-07-16 00:22:51.794897] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.049 [2024-07-16 00:22:51.794900] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2be40) on tqpair=0x1ba8ec0 00:21:33.049 [2024-07-16 00:22:51.794904] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:33.049 [2024-07-16 00:22:51.794908] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:33.049 [2024-07-16 00:22:51.794915] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:33.049 [2024-07-16 00:22:51.795019] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:33.049 [2024-07-16 00:22:51.795024] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:33.049 [2024-07-16 00:22:51.795034] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.049 [2024-07-16 00:22:51.795037] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.049 [2024-07-16 00:22:51.795041] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ba8ec0) 00:21:33.050 [2024-07-16 00:22:51.795046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.050 [2024-07-16 00:22:51.795056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2be40, cid 0, qid 0 00:21:33.050 [2024-07-16 00:22:51.795136] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.050 [2024-07-16 00:22:51.795142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:21:33.050 [2024-07-16 00:22:51.795144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2be40) on tqpair=0x1ba8ec0 00:21:33.050 [2024-07-16 00:22:51.795151] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:33.050 [2024-07-16 00:22:51.795159] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795163] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795166] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ba8ec0) 00:21:33.050 [2024-07-16 00:22:51.795172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.050 [2024-07-16 00:22:51.795181] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2be40, cid 0, qid 0 00:21:33.050 [2024-07-16 00:22:51.795258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.050 [2024-07-16 00:22:51.795264] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.050 [2024-07-16 00:22:51.795267] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795270] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2be40) on tqpair=0x1ba8ec0 00:21:33.050 [2024-07-16 00:22:51.795274] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:33.050 [2024-07-16 00:22:51.795278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:33.050 [2024-07-16 00:22:51.795284] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:33.050 [2024-07-16 00:22:51.795292] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:33.050 [2024-07-16 00:22:51.795301] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795304] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ba8ec0) 00:21:33.050 [2024-07-16 00:22:51.795310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.050 [2024-07-16 00:22:51.795320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2be40, cid 0, qid 0 00:21:33.050 [2024-07-16 00:22:51.795424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:33.050 [2024-07-16 00:22:51.795429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:33.050 [2024-07-16 00:22:51.795432] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795435] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ba8ec0): datao=0, datal=4096, cccid=0 00:21:33.050 [2024-07-16 00:22:51.795440] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c2be40) on tqpair(0x1ba8ec0): expected_datao=0, 
payload_size=4096 00:21:33.050 [2024-07-16 00:22:51.795445] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795481] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795486] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795539] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.050 [2024-07-16 00:22:51.795545] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.050 [2024-07-16 00:22:51.795548] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795551] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2be40) on tqpair=0x1ba8ec0 00:21:33.050 [2024-07-16 00:22:51.795558] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:33.050 [2024-07-16 00:22:51.795565] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:33.050 [2024-07-16 00:22:51.795569] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:33.050 [2024-07-16 00:22:51.795573] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:33.050 [2024-07-16 00:22:51.795577] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:33.050 [2024-07-16 00:22:51.795581] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:33.050 [2024-07-16 00:22:51.795589] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:33.050 [2024-07-16 00:22:51.795596] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795599] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795602] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ba8ec0) 00:21:33.050 [2024-07-16 00:22:51.795609] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:33.050 [2024-07-16 00:22:51.795619] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2be40, cid 0, qid 0 00:21:33.050 [2024-07-16 00:22:51.795702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.050 [2024-07-16 00:22:51.795708] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.050 [2024-07-16 00:22:51.795710] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795713] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2be40) on tqpair=0x1ba8ec0 00:21:33.050 [2024-07-16 00:22:51.795721] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795724] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ba8ec0) 00:21:33.050 [2024-07-16 00:22:51.795733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.050 [2024-07-16 00:22:51.795738] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795741] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795744] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ba8ec0) 00:21:33.050 [2024-07-16 00:22:51.795749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.050 [2024-07-16 00:22:51.795754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795757] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795760] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ba8ec0) 00:21:33.050 [2024-07-16 00:22:51.795765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.050 [2024-07-16 00:22:51.795772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795776] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.050 [2024-07-16 00:22:51.795784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.050 [2024-07-16 00:22:51.795788] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:33.050 [2024-07-16 00:22:51.795798] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:33.050 [2024-07-16 00:22:51.795804] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795807] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ba8ec0) 00:21:33.050 [2024-07-16 00:22:51.795813] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.050 [2024-07-16 00:22:51.795824] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2be40, cid 0, qid 0 00:21:33.050 [2024-07-16 00:22:51.795829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2bfc0, cid 1, qid 0 00:21:33.050 [2024-07-16 00:22:51.795832] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c140, cid 2, qid 0 00:21:33.050 [2024-07-16 00:22:51.795837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.050 [2024-07-16 00:22:51.795841] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c440, cid 4, qid 0 00:21:33.050 [2024-07-16 00:22:51.795953] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.050 [2024-07-16 00:22:51.795959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.050 [2024-07-16 00:22:51.795962] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795965] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1c2c440) on tqpair=0x1ba8ec0 00:21:33.050 [2024-07-16 00:22:51.795969] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:33.050 [2024-07-16 00:22:51.795974] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:33.050 [2024-07-16 00:22:51.795983] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.795987] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ba8ec0) 00:21:33.050 [2024-07-16 00:22:51.795992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.050 [2024-07-16 00:22:51.796002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c440, cid 4, qid 0 00:21:33.050 [2024-07-16 00:22:51.796108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:33.050 [2024-07-16 00:22:51.796114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:33.050 [2024-07-16 00:22:51.796117] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.796121] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ba8ec0): datao=0, datal=4096, cccid=4 00:21:33.050 [2024-07-16 00:22:51.796125] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c2c440) on tqpair(0x1ba8ec0): expected_datao=0, payload_size=4096 00:21:33.050 [2024-07-16 00:22:51.796129] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.796134] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.796138] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.839233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.050 [2024-07-16 00:22:51.839247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.050 [2024-07-16 00:22:51.839250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.839254] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c440) on tqpair=0x1ba8ec0 00:21:33.050 [2024-07-16 00:22:51.839266] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:33.050 [2024-07-16 00:22:51.839291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.839295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ba8ec0) 00:21:33.050 [2024-07-16 00:22:51.839303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.050 [2024-07-16 00:22:51.839309] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.050 [2024-07-16 00:22:51.839312] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.051 [2024-07-16 00:22:51.839315] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ba8ec0) 00:21:33.051 [2024-07-16 00:22:51.839321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.051 [2024-07-16 
00:22:51.839336] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c440, cid 4, qid 0 00:21:33.051 [2024-07-16 00:22:51.839341] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c5c0, cid 5, qid 0 00:21:33.051 [2024-07-16 00:22:51.839602] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:33.051 [2024-07-16 00:22:51.839608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:33.051 [2024-07-16 00:22:51.839610] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:33.051 [2024-07-16 00:22:51.839614] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ba8ec0): datao=0, datal=1024, cccid=4 00:21:33.051 [2024-07-16 00:22:51.839617] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c2c440) on tqpair(0x1ba8ec0): expected_datao=0, payload_size=1024 00:21:33.051 [2024-07-16 00:22:51.839621] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.051 [2024-07-16 00:22:51.839627] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:33.051 [2024-07-16 00:22:51.839630] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:33.051 [2024-07-16 00:22:51.839635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.051 [2024-07-16 00:22:51.839640] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.051 [2024-07-16 00:22:51.839643] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.051 [2024-07-16 00:22:51.839646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c5c0) on tqpair=0x1ba8ec0 00:21:33.051 [2024-07-16 00:22:51.881389] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.051 [2024-07-16 00:22:51.881402] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.051 [2024-07-16 00:22:51.881406] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.051 [2024-07-16 00:22:51.881409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c440) on tqpair=0x1ba8ec0 00:21:33.051 [2024-07-16 00:22:51.881426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.051 [2024-07-16 00:22:51.881431] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ba8ec0) 00:21:33.051 [2024-07-16 00:22:51.881437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.051 [2024-07-16 00:22:51.881454] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c440, cid 4, qid 0 00:21:33.051 [2024-07-16 00:22:51.881568] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:33.051 [2024-07-16 00:22:51.881574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:33.051 [2024-07-16 00:22:51.881577] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:33.051 [2024-07-16 00:22:51.881583] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ba8ec0): datao=0, datal=3072, cccid=4 00:21:33.051 [2024-07-16 00:22:51.881587] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c2c440) on tqpair(0x1ba8ec0): expected_datao=0, payload_size=3072 00:21:33.051 [2024-07-16 00:22:51.881591] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.051 [2024-07-16 00:22:51.881597] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:33.051 [2024-07-16 00:22:51.881600] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:33.051 [2024-07-16 00:22:51.881677] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.051 [2024-07-16 00:22:51.881682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.051 [2024-07-16 00:22:51.881685] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.051 [2024-07-16 00:22:51.881689] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c440) on tqpair=0x1ba8ec0 00:21:33.051 [2024-07-16 00:22:51.881696] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.051 [2024-07-16 00:22:51.881700] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ba8ec0) 00:21:33.051 [2024-07-16 00:22:51.881706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.051 [2024-07-16 00:22:51.881719] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c440, cid 4, qid 0 00:21:33.051 [2024-07-16 00:22:51.881858] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:33.051 [2024-07-16 00:22:51.881863] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:33.051 [2024-07-16 00:22:51.881866] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:33.051 [2024-07-16 00:22:51.881869] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ba8ec0): datao=0, datal=8, cccid=4 00:21:33.051 [2024-07-16 00:22:51.881873] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c2c440) on tqpair(0x1ba8ec0): expected_datao=0, payload_size=8 00:21:33.051 [2024-07-16 00:22:51.881877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.051 [2024-07-16 00:22:51.881882] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:33.051 [2024-07-16 00:22:51.881885] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:33.319 [2024-07-16 00:22:51.927235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.319 [2024-07-16 00:22:51.927246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.319 [2024-07-16 00:22:51.927249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.319 [2024-07-16 00:22:51.927253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c440) on tqpair=0x1ba8ec0
00:21:33.319 =====================================================
00:21:33.319 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:21:33.319 =====================================================
00:21:33.319 Controller Capabilities/Features
00:21:33.319 ================================
00:21:33.319 Vendor ID: 0000
00:21:33.319 Subsystem Vendor ID: 0000
00:21:33.319 Serial Number: ....................
00:21:33.319 Model Number: ........................................
00:21:33.319 Firmware Version: 24.09
00:21:33.319 Recommended Arb Burst: 0
00:21:33.319 IEEE OUI Identifier: 00 00 00
00:21:33.319 Multi-path I/O
00:21:33.319 May have multiple subsystem ports: No
00:21:33.319 May have multiple controllers: No
00:21:33.319 Associated with SR-IOV VF: No
00:21:33.319 Max Data Transfer Size: 131072
00:21:33.319 Max Number of Namespaces: 0
00:21:33.319 Max Number of I/O Queues: 1024
00:21:33.319 NVMe Specification Version (VS): 1.3
00:21:33.319 NVMe Specification Version (Identify): 1.3
00:21:33.319 Maximum Queue Entries: 128
00:21:33.319 Contiguous Queues Required: Yes
00:21:33.319 Arbitration Mechanisms Supported
00:21:33.319 Weighted Round Robin: Not Supported
00:21:33.319 Vendor Specific: Not Supported
00:21:33.319 Reset Timeout: 15000 ms
00:21:33.319 Doorbell Stride: 4 bytes
00:21:33.319 NVM Subsystem Reset: Not Supported
00:21:33.319 Command Sets Supported
00:21:33.319 NVM Command Set: Supported
00:21:33.319 Boot Partition: Not Supported
00:21:33.319 Memory Page Size Minimum: 4096 bytes
00:21:33.319 Memory Page Size Maximum: 4096 bytes
00:21:33.319 Persistent Memory Region: Not Supported
00:21:33.319 Optional Asynchronous Events Supported
00:21:33.319 Namespace Attribute Notices: Not Supported
00:21:33.319 Firmware Activation Notices: Not Supported
00:21:33.319 ANA Change Notices: Not Supported
00:21:33.319 PLE Aggregate Log Change Notices: Not Supported
00:21:33.319 LBA Status Info Alert Notices: Not Supported
00:21:33.319 EGE Aggregate Log Change Notices: Not Supported
00:21:33.319 Normal NVM Subsystem Shutdown event: Not Supported
00:21:33.319 Zone Descriptor Change Notices: Not Supported
00:21:33.319 Discovery Log Change Notices: Supported
00:21:33.319 Controller Attributes
00:21:33.319 128-bit Host Identifier: Not Supported
00:21:33.319 Non-Operational Permissive Mode: Not Supported
00:21:33.319 NVM Sets: Not Supported
00:21:33.319 Read Recovery Levels: Not Supported
00:21:33.319 Endurance Groups: Not Supported
00:21:33.319 Predictable Latency Mode: Not Supported
00:21:33.319 Traffic Based Keep Alive: Not Supported
00:21:33.319 Namespace Granularity: Not Supported
00:21:33.319 SQ Associations: Not Supported
00:21:33.319 UUID List: Not Supported
00:21:33.319 Multi-Domain Subsystem: Not Supported
00:21:33.319 Fixed Capacity Management: Not Supported
00:21:33.319 Variable Capacity Management: Not Supported
00:21:33.319 Delete Endurance Group: Not Supported
00:21:33.319 Delete NVM Set: Not Supported
00:21:33.319 Extended LBA Formats Supported: Not Supported
00:21:33.319 Flexible Data Placement Supported: Not Supported
00:21:33.319
00:21:33.319 Controller Memory Buffer Support
00:21:33.319 ================================
00:21:33.319 Supported: No
00:21:33.319
00:21:33.319 Persistent Memory Region Support
00:21:33.319 ================================
00:21:33.319 Supported: No
00:21:33.319
00:21:33.319 Admin Command Set Attributes
00:21:33.319 ============================
00:21:33.319 Security Send/Receive: Not Supported
00:21:33.319 Format NVM: Not Supported
00:21:33.319 Firmware Activate/Download: Not Supported
00:21:33.319 Namespace Management: Not Supported
00:21:33.319 Device Self-Test: Not Supported
00:21:33.319 Directives: Not Supported
00:21:33.319 NVMe-MI: Not Supported
00:21:33.319 Virtualization Management: Not Supported
00:21:33.319 Doorbell Buffer Config: Not Supported
00:21:33.319 Get LBA Status Capability: Not Supported
00:21:33.319 Command & Feature Lockdown Capability: Not Supported
00:21:33.319 Abort Command Limit: 1
00:21:33.319 Async Event Request Limit: 4
00:21:33.320 Number of Firmware Slots: N/A
00:21:33.320 Firmware Slot 1 Read-Only: N/A
00:21:33.320 Firmware Activation Without Reset: N/A
00:21:33.320 Multiple Update Detection Support: N/A
00:21:33.320 Firmware Update Granularity: No Information Provided
00:21:33.320 Per-Namespace SMART Log: No
00:21:33.320 Asymmetric Namespace Access Log Page: Not Supported
00:21:33.320 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:21:33.320 Command Effects Log Page: Not Supported
00:21:33.320 Get Log Page Extended Data: Supported
00:21:33.320 Telemetry Log Pages: Not Supported
00:21:33.320 Persistent Event Log Pages: Not Supported
00:21:33.320 Supported Log Pages Log Page: May Support
00:21:33.320 Commands Supported & Effects Log Page: Not Supported
00:21:33.320 Feature Identifiers & Effects Log Page: May Support
00:21:33.320 NVMe-MI Commands & Effects Log Page: May Support
00:21:33.320 Data Area 4 for Telemetry Log: Not Supported
00:21:33.320 Error Log Page Entries Supported: 128
00:21:33.320 Keep Alive: Not Supported
00:21:33.320
00:21:33.320 NVM Command Set Attributes
00:21:33.320 ==========================
00:21:33.320 Submission Queue Entry Size
00:21:33.320 Max: 1
00:21:33.320 Min: 1
00:21:33.320 Completion Queue Entry Size
00:21:33.320 Max: 1
00:21:33.320 Min: 1
00:21:33.320 Number of Namespaces: 0
00:21:33.320 Compare Command: Not Supported
00:21:33.320 Write Uncorrectable Command: Not Supported
00:21:33.320 Dataset Management Command: Not Supported
00:21:33.320 Write Zeroes Command: Not Supported
00:21:33.320 Set Features Save Field: Not Supported
00:21:33.320 Reservations: Not Supported
00:21:33.320 Timestamp: Not Supported
00:21:33.320 Copy: Not Supported
00:21:33.320 Volatile Write Cache: Not Present
00:21:33.320 Atomic Write Unit (Normal): 1
00:21:33.320 Atomic Write Unit (PFail): 1
00:21:33.320 Atomic Compare & Write Unit: 1
00:21:33.320 Fused Compare & Write: Supported
00:21:33.320 Scatter-Gather List
00:21:33.320 SGL Command Set: Supported
00:21:33.320 SGL Keyed: Supported
00:21:33.320 SGL Bit Bucket Descriptor: Not Supported
00:21:33.320 SGL Metadata Pointer: Not Supported
00:21:33.320 Oversized SGL: Not Supported
00:21:33.320 SGL Metadata Address: Not Supported
00:21:33.320 SGL Offset: Supported
00:21:33.320 Transport SGL Data Block: Not Supported
00:21:33.320 Replay Protected Memory Block: Not Supported
00:21:33.320
00:21:33.320 Firmware Slot Information
00:21:33.320 =========================
00:21:33.320 Active slot: 0
00:21:33.320
00:21:33.320
00:21:33.320 Error Log
00:21:33.320 =========
00:21:33.320
00:21:33.320 Active Namespaces
00:21:33.320 =================
00:21:33.320 Discovery Log Page
00:21:33.320 ==================
00:21:33.320 Generation Counter: 2
00:21:33.320 Number of Records: 2
00:21:33.320 Record Format: 0
00:21:33.320
00:21:33.320 Discovery Log Entry 0
00:21:33.320 ----------------------
00:21:33.320 Transport Type: 3 (TCP)
00:21:33.320 Address Family: 1 (IPv4)
00:21:33.320 Subsystem Type: 3 (Current Discovery Subsystem)
00:21:33.320 Entry Flags:
00:21:33.320 Duplicate Returned Information: 1
00:21:33.320 Explicit Persistent Connection Support for Discovery: 1
00:21:33.320 Transport Requirements:
00:21:33.320 Secure Channel: Not Required
00:21:33.320 Port ID: 0 (0x0000)
00:21:33.320 Controller ID: 65535 (0xffff)
00:21:33.320 Admin Max SQ Size: 128
00:21:33.320 Transport Service Identifier: 4420
00:21:33.320 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:21:33.320 Transport Address: 10.0.0.2
00:21:33.320 Discovery Log Entry 1
00:21:33.320 ----------------------
00:21:33.320 Transport Type: 3 (TCP)
00:21:33.320 Address Family: 1 (IPv4)
00:21:33.320 Subsystem Type: 2 (NVM Subsystem)
00:21:33.320 Entry Flags:
00:21:33.320 Duplicate Returned Information: 0
00:21:33.320 Explicit Persistent Connection Support for Discovery: 0
00:21:33.320 Transport Requirements:
00:21:33.320 Secure Channel: Not Required
00:21:33.320 Port ID: 0 (0x0000)
00:21:33.320 Controller ID: 65535 (0xffff)
00:21:33.320 Admin Max SQ Size: 128
00:21:33.320 Transport Service Identifier: 4420
00:21:33.320 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:21:33.320 Transport Address: 10.0.0.2
[2024-07-16 00:22:51.927335] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:33.320 [2024-07-16 00:22:51.927345] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2be40) on tqpair=0x1ba8ec0 00:21:33.320 [2024-07-16 00:22:51.927351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.320 [2024-07-16 00:22:51.927356] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2bfc0) on tqpair=0x1ba8ec0 00:21:33.320 [2024-07-16 00:22:51.927360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.320 [2024-07-16 00:22:51.927364] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c140) on tqpair=0x1ba8ec0 00:21:33.320 [2024-07-16 00:22:51.927368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.320 [2024-07-16 00:22:51.927372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.320 [2024-07-16 00:22:51.927376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.320 [2024-07-16 00:22:51.927387] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.320 [2024-07-16 00:22:51.927391] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.320 [2024-07-16 00:22:51.927394] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.320 [2024-07-16 00:22:51.927401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.320 [2024-07-16 00:22:51.927414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.320 [2024-07-16 00:22:51.927494] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.320 [2024-07-16 00:22:51.927501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.320 [2024-07-16 00:22:51.927504] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.320 [2024-07-16 00:22:51.927507] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.320 [2024-07-16 00:22:51.927514] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.320 [2024-07-16 00:22:51.927517] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.927520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.321 [2024-07-16
00:22:51.927526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.321 [2024-07-16 00:22:51.927540] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.321 [2024-07-16 00:22:51.927690] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.321 [2024-07-16 00:22:51.927695] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.321 [2024-07-16 00:22:51.927698] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.927701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.321 [2024-07-16 00:22:51.927705] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:33.321 [2024-07-16 00:22:51.927709] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:33.321 [2024-07-16 00:22:51.927718] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.927722] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.927725] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.321 [2024-07-16 00:22:51.927731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.321 [2024-07-16 00:22:51.927741] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.321 [2024-07-16 00:22:51.927819] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.321 [2024-07-16 00:22:51.927825] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.321 [2024-07-16 00:22:51.927828] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.927831] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.321 [2024-07-16 00:22:51.927840] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.927844] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.927847] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.321 [2024-07-16 00:22:51.927852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.321 [2024-07-16 00:22:51.927861] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.321 [2024-07-16 00:22:51.927941] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.321 [2024-07-16 00:22:51.927947] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.321 [2024-07-16 00:22:51.927952] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.927955] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.321 [2024-07-16 00:22:51.927963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.927967] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.927970] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.321 [2024-07-16 00:22:51.927975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.321 [2024-07-16 00:22:51.927985] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.321 [2024-07-16 00:22:51.928064] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.321 [2024-07-16 00:22:51.928069] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.321 [2024-07-16 00:22:51.928072] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.928076] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.321 [2024-07-16 00:22:51.928083] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.928087] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.928090] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.321 [2024-07-16 00:22:51.928096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.321 [2024-07-16 00:22:51.928105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.321 [2024-07-16 00:22:51.928192] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.321 [2024-07-16 00:22:51.928197] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.321 [2024-07-16 00:22:51.928200] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.928204] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.321 [2024-07-16 00:22:51.928211] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.928215] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.928218] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.321 [2024-07-16 00:22:51.928228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.321 [2024-07-16 00:22:51.928238] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.321 [2024-07-16 00:22:51.928317] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.321 [2024-07-16 00:22:51.928323] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.321 [2024-07-16 00:22:51.928326] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.928329] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.321 [2024-07-16 00:22:51.928337] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.928341] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.928344] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.321 [2024-07-16 00:22:51.928349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.321 [2024-07-16 00:22:51.928358] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.321 [2024-07-16 00:22:51.928435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.321 [2024-07-16 00:22:51.928441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.321 [2024-07-16 00:22:51.928444] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.928449] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.321 [2024-07-16 00:22:51.928457] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.928461] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.928464] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.321 [2024-07-16 00:22:51.928469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.321 [2024-07-16 00:22:51.928479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.321 [2024-07-16 00:22:51.928560] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.321 [2024-07-16 00:22:51.928566] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.321 [2024-07-16 00:22:51.928569] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.928572] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.321 [2024-07-16 00:22:51.928580] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.928583] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.321 [2024-07-16 00:22:51.928586] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.322 [2024-07-16 00:22:51.928592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.322 [2024-07-16 00:22:51.928601] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.322 [2024-07-16 00:22:51.928683] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.322 [2024-07-16 00:22:51.928688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.322 [2024-07-16 00:22:51.928691] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.928694] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.322 [2024-07-16 00:22:51.928702] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.928706] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.928709] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.322 [2024-07-16 00:22:51.928714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.322 [2024-07-16 00:22:51.928723] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.322 
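The Discovery Log Page dumped above (two records: the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1) is what the GET LOG PAGE (02) commands in this trace fetch: log identifier 0x70, read as a 1024-byte header (the datal=1024 transfer) plus the per-record entries. A minimal sketch of the same fetch through SPDK's public API, assuming spdk/nvme.h and the spdk_nvmf_discovery_log_page layout from spdk/nvmf_spec.h; the synchronous polling loop and header-only read are simplifications, not how the identify tool itself is written:

/* Sketch: fetch the discovery log page header via SPDK's public API.
 * Error handling trimmed; real code re-reads with room for numrec entries. */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static volatile bool g_log_done;

static void
get_log_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg;
	(void)cpl;
	g_log_done = true;
}

/* ctrlr: a connected discovery controller, e.g. from spdk_nvme_connect(). */
static void
dump_discovery_log_header(struct spdk_nvme_ctrlr *ctrlr)
{
	static struct spdk_nvmf_discovery_log_page log_hdr; /* 1024-byte header */

	/* SPDK_NVME_LOG_DISCOVERY is log identifier 0x70, the low byte of the
	 * cdw10 values printed for the GET LOG PAGE (02) commands above. */
	spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
					 &log_hdr, sizeof(log_hdr), 0,
					 get_log_done, NULL);
	while (!g_log_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	printf("Generation Counter: %" PRIu64 ", Number of Records: %" PRIu64 "\n",
	       log_hdr.genctr, log_hdr.numrec);
}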
[2024-07-16 00:22:51.928804] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.322 [2024-07-16 00:22:51.928810] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.322 [2024-07-16 00:22:51.928813] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.928816] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.322 [2024-07-16 00:22:51.928824] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.928827] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.928830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.322 [2024-07-16 00:22:51.928836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.322 [2024-07-16 00:22:51.928845] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.322 [2024-07-16 00:22:51.928927] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.322 [2024-07-16 00:22:51.928933] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.322 [2024-07-16 00:22:51.928936] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.928939] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.322 [2024-07-16 00:22:51.928950] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.928954] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.928957] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.322 [2024-07-16 00:22:51.928963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.322 [2024-07-16 00:22:51.928972] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.322 [2024-07-16 00:22:51.929052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.322 [2024-07-16 00:22:51.929057] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.322 [2024-07-16 00:22:51.929060] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.929063] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.322 [2024-07-16 00:22:51.929071] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.929075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.929078] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.322 [2024-07-16 00:22:51.929084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.322 [2024-07-16 00:22:51.929093] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.322 [2024-07-16 00:22:51.929172] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.322 [2024-07-16 00:22:51.929178] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
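The long run of FABRIC PROPERTY GET qid:0 cid:3 entries around here is the host polling CSTS while the discovery controller shuts down (the trace earlier reported RTD3E = 0 us and a 10000 ms shutdown timeout, and shortly reports completion). A register-level sketch of that handshake; read_reg32/write_reg32 are hypothetical stand-ins for the fabrics property get/set that each poll issues, with bit positions from the NVMe base specification:

#include <stdint.h>
#include <stdbool.h>

#define NVME_REG_CC        0x14 /* Controller Configuration */
#define NVME_REG_CSTS      0x1c /* Controller Status */
#define CC_SHN_SHIFT       14   /* CC.SHN: shutdown notification, bits 15:14 */
#define CC_SHN_NORMAL      0x1
#define CSTS_SHST_SHIFT    2    /* CSTS.SHST: shutdown status, bits 3:2 */
#define CSTS_SHST_COMPLETE 0x2

extern uint32_t read_reg32(uint32_t offset);            /* hypothetical property get */
extern void write_reg32(uint32_t offset, uint32_t val); /* hypothetical property set */
extern bool timed_out(void);                            /* hypothetical 10 s budget */

static bool
shutdown_controller(void)
{
	uint32_t cc = read_reg32(NVME_REG_CC);

	cc &= ~(0x3u << CC_SHN_SHIFT);
	cc |= (uint32_t)CC_SHN_NORMAL << CC_SHN_SHIFT; /* request a normal shutdown */
	write_reg32(NVME_REG_CC, cc);

	/* Each poll below surfaces as one FABRIC PROPERTY GET in the log. */
	while (!timed_out()) {
		uint32_t csts = read_reg32(NVME_REG_CSTS);
		if (((csts >> CSTS_SHST_SHIFT) & 0x3) == CSTS_SHST_COMPLETE) {
			return true; /* "shutdown complete in N milliseconds" */
		}
	}
	return false;
}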
00:21:33.322 [2024-07-16 00:22:51.929181] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.929184] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.322 [2024-07-16 00:22:51.929192] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.929195] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.929198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.322 [2024-07-16 00:22:51.929204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.322 [2024-07-16 00:22:51.929213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.322 [2024-07-16 00:22:51.929294] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.322 [2024-07-16 00:22:51.929300] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.322 [2024-07-16 00:22:51.929303] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.929306] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.322 [2024-07-16 00:22:51.929314] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.929318] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.929321] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.322 [2024-07-16 00:22:51.929326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.322 [2024-07-16 00:22:51.929336] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.322 [2024-07-16 00:22:51.929414] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.322 [2024-07-16 00:22:51.929419] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.322 [2024-07-16 00:22:51.929422] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.929425] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.322 [2024-07-16 00:22:51.929434] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.929439] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.929442] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.322 [2024-07-16 00:22:51.929448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.322 [2024-07-16 00:22:51.929457] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.322 [2024-07-16 00:22:51.929535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.322 [2024-07-16 00:22:51.929540] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.322 [2024-07-16 00:22:51.929543] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.929546] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.322 [2024-07-16 00:22:51.929555] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.929558] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.322 [2024-07-16 00:22:51.929561] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.322 [2024-07-16 00:22:51.929567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.322 [2024-07-16 00:22:51.929577] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.322 [2024-07-16 00:22:51.929657] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.322 [2024-07-16 00:22:51.929662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.323 [2024-07-16 00:22:51.929666] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:51.929669] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.323 [2024-07-16 00:22:51.929677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:51.929680] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:51.929683] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.323 [2024-07-16 00:22:51.929689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.323 [2024-07-16 00:22:51.929699] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.323 [2024-07-16 00:22:51.933232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.323 [2024-07-16 00:22:51.933241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.323 [2024-07-16 00:22:51.933245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:51.933248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.323 [2024-07-16 00:22:51.933258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:51.933262] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:51.933265] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba8ec0) 00:21:33.323 [2024-07-16 00:22:51.933271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.323 [2024-07-16 00:22:51.933282] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c2c2c0, cid 3, qid 0 00:21:33.323 [2024-07-16 00:22:51.933449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.323 [2024-07-16 00:22:51.933455] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.323 [2024-07-16 00:22:51.933458] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:51.933461] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c2c2c0) on tqpair=0x1ba8ec0 00:21:33.323 [2024-07-16 00:22:51.933467] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 
milliseconds 00:21:33.323 00:21:33.323 00:22:51 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:33.323 [2024-07-16 00:22:51.970205] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:21:33.323 [2024-07-16 00:22:51.970250] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587242 ] 00:21:33.323 [2024-07-16 00:22:51.997449] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:33.323 [2024-07-16 00:22:51.997493] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:33.323 [2024-07-16 00:22:51.997498] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:33.323 [2024-07-16 00:22:51.997507] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:33.323 [2024-07-16 00:22:51.997513] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:33.323 [2024-07-16 00:22:51.997857] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:33.323 [2024-07-16 00:22:51.997880] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x14ecec0 0 00:21:33.323 [2024-07-16 00:22:52.004237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:33.323 [2024-07-16 00:22:52.004248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:33.323 [2024-07-16 00:22:52.004251] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:33.323 [2024-07-16 00:22:52.004254] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:33.323 [2024-07-16 00:22:52.004282] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:52.004286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:52.004290] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14ecec0) 00:21:33.323 [2024-07-16 00:22:52.004300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:33.323 [2024-07-16 00:22:52.004314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x156fe40, cid 0, qid 0 00:21:33.323 [2024-07-16 00:22:52.011234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.323 [2024-07-16 00:22:52.011242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.323 [2024-07-16 00:22:52.011246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:52.011249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x156fe40) on tqpair=0x14ecec0 00:21:33.323 [2024-07-16 00:22:52.011260] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:33.323 [2024-07-16 00:22:52.011266] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:33.323 [2024-07-16 00:22:52.011270] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:33.323 [2024-07-16 00:22:52.011280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:52.011283] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:52.011286] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14ecec0) 00:21:33.323 [2024-07-16 00:22:52.011293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.323 [2024-07-16 00:22:52.011306] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x156fe40, cid 0, qid 0 00:21:33.323 [2024-07-16 00:22:52.011472] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.323 [2024-07-16 00:22:52.011478] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.323 [2024-07-16 00:22:52.011481] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:52.011485] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x156fe40) on tqpair=0x14ecec0 00:21:33.323 [2024-07-16 00:22:52.011489] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:33.323 [2024-07-16 00:22:52.011495] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:33.323 [2024-07-16 00:22:52.011501] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:52.011505] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:52.011508] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14ecec0) 00:21:33.323 [2024-07-16 00:22:52.011514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.323 [2024-07-16 00:22:52.011524] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x156fe40, cid 0, qid 0 00:21:33.323 [2024-07-16 00:22:52.011604] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.323 [2024-07-16 00:22:52.011610] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.323 [2024-07-16 00:22:52.011612] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:52.011616] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x156fe40) on tqpair=0x14ecec0 00:21:33.323 [2024-07-16 00:22:52.011620] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:33.323 [2024-07-16 00:22:52.011626] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:33.323 [2024-07-16 00:22:52.011632] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:52.011635] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:52.011638] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14ecec0) 00:21:33.323 [2024-07-16 00:22:52.011644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
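The pdu type = N lines in these traces are NVMe/TCP PDU header types: the icreq/icresp pair (types 0 and 1) establishes the connection, command capsules complete as CapsuleResp (type 5), and controller-to-host read data arrives as C2HData (type 7). An illustrative receive-side dispatch over the first byte of the PDU common header, with the type values taken from the NVMe/TCP transport specification; the handler functions are hypothetical:

#include <stdint.h>

enum nvme_tcp_pdu_type {
	NVME_TCP_PDU_ICREQ        = 0x00,
	NVME_TCP_PDU_ICRESP       = 0x01, /* "pdu type = 1": reply to our icreq */
	NVME_TCP_PDU_H2C_TERM_REQ = 0x02,
	NVME_TCP_PDU_C2H_TERM_REQ = 0x03,
	NVME_TCP_PDU_CAPSULE_CMD  = 0x04,
	NVME_TCP_PDU_CAPSULE_RESP = 0x05, /* "pdu type = 5": command completion */
	NVME_TCP_PDU_H2C_DATA     = 0x06,
	NVME_TCP_PDU_C2H_DATA     = 0x07, /* "pdu type = 7": read data from controller */
	NVME_TCP_PDU_R2T          = 0x09,
};

/* Hypothetical handlers; only the dispatch shape matters here. */
extern void handle_icresp(const uint8_t *pdu);
extern void handle_capsule_resp(const uint8_t *pdu);
extern void handle_c2h_data(const uint8_t *pdu);

static int
dispatch_pdu(const uint8_t *pdu)
{
	switch (pdu[0]) { /* byte 0 of the common header is the PDU type */
	case NVME_TCP_PDU_ICRESP:       handle_icresp(pdu);       return 0;
	case NVME_TCP_PDU_CAPSULE_RESP: handle_capsule_resp(pdu); return 0;
	case NVME_TCP_PDU_C2H_DATA:     handle_c2h_data(pdu);     return 0;
	default:                        return -1; /* fatal: terminate the connection */
	}
}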
00:21:33.323 [2024-07-16 00:22:52.011654] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x156fe40, cid 0, qid 0 00:21:33.323 [2024-07-16 00:22:52.011738] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.323 [2024-07-16 00:22:52.011744] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.323 [2024-07-16 00:22:52.011746] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:52.011750] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x156fe40) on tqpair=0x14ecec0 00:21:33.323 [2024-07-16 00:22:52.011754] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:33.323 [2024-07-16 00:22:52.011762] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:52.011765] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:52.011768] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14ecec0) 00:21:33.323 [2024-07-16 00:22:52.011774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.323 [2024-07-16 00:22:52.011783] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x156fe40, cid 0, qid 0 00:21:33.323 [2024-07-16 00:22:52.011862] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.323 [2024-07-16 00:22:52.011868] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.323 [2024-07-16 00:22:52.011870] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.323 [2024-07-16 00:22:52.011876] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x156fe40) on tqpair=0x14ecec0 00:21:33.323 [2024-07-16 00:22:52.011880] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:33.324 [2024-07-16 00:22:52.011884] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:33.324 [2024-07-16 00:22:52.011890] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:33.324 [2024-07-16 00:22:52.011995] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:33.324 [2024-07-16 00:22:52.011998] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:33.324 [2024-07-16 00:22:52.012005] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.012008] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.012011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14ecec0) 00:21:33.324 [2024-07-16 00:22:52.012017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.324 [2024-07-16 00:22:52.012027] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x156fe40, cid 0, qid 0 00:21:33.324 [2024-07-16 00:22:52.012118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.324 
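The "check en" through "wait for CSTS.RDY = 1" states above are the standard controller-enable handshake: seeing CC.EN = 0 and CSTS.RDY = 0, the host writes CC.EN = 1 and then polls CSTS (each poll is another FABRIC PROPERTY GET) until RDY is set. A condensed sketch, again with hypothetical read_reg32/write_reg32 standing in for the transport's property get/set:

#include <stdint.h>
#include <stdbool.h>

#define NVME_REG_CC   0x14
#define NVME_REG_CSTS 0x1c
#define CC_EN         0x1 /* CC.EN: bit 0 */
#define CSTS_RDY      0x1 /* CSTS.RDY: bit 0 */

extern uint32_t read_reg32(uint32_t offset);            /* hypothetical property get */
extern void write_reg32(uint32_t offset, uint32_t val); /* hypothetical property set */

/* One pass of the enable handshake; callers repeat until it returns true. */
static bool
enable_step(void)
{
	uint32_t cc = read_reg32(NVME_REG_CC);
	uint32_t csts = read_reg32(NVME_REG_CSTS);

	if (!(cc & CC_EN) && !(csts & CSTS_RDY)) {
		/* "CC.EN = 0 && CSTS.RDY = 0": controller is disabled, enable it. */
		write_reg32(NVME_REG_CC, cc | CC_EN);
		return false;
	}
	/* "wait for CSTS.RDY = 1": not ready yet until both bits are set. */
	return (cc & CC_EN) && (csts & CSTS_RDY);
}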
[2024-07-16 00:22:52.012124] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.324 [2024-07-16 00:22:52.012126] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.012130] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x156fe40) on tqpair=0x14ecec0 00:21:33.324 [2024-07-16 00:22:52.012134] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:33.324 [2024-07-16 00:22:52.012141] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.012145] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.012148] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14ecec0) 00:21:33.324 [2024-07-16 00:22:52.012154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.324 [2024-07-16 00:22:52.012162] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x156fe40, cid 0, qid 0 00:21:33.324 [2024-07-16 00:22:52.012244] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.324 [2024-07-16 00:22:52.012250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.324 [2024-07-16 00:22:52.012253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.012256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x156fe40) on tqpair=0x14ecec0 00:21:33.324 [2024-07-16 00:22:52.012260] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:33.324 [2024-07-16 00:22:52.012264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:33.324 [2024-07-16 00:22:52.012272] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:33.324 [2024-07-16 00:22:52.012282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:33.324 [2024-07-16 00:22:52.012291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.012294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14ecec0) 00:21:33.324 [2024-07-16 00:22:52.012299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.324 [2024-07-16 00:22:52.012312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x156fe40, cid 0, qid 0 00:21:33.324 [2024-07-16 00:22:52.012434] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:33.324 [2024-07-16 00:22:52.012440] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:33.324 [2024-07-16 00:22:52.012442] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.012446] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14ecec0): datao=0, datal=4096, cccid=0 00:21:33.324 [2024-07-16 00:22:52.012449] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x156fe40) on 
tqpair(0x14ecec0): expected_datao=0, payload_size=4096 00:21:33.324 [2024-07-16 00:22:52.012453] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.012484] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.012488] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.056235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.324 [2024-07-16 00:22:52.056248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.324 [2024-07-16 00:22:52.056252] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.056256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x156fe40) on tqpair=0x14ecec0 00:21:33.324 [2024-07-16 00:22:52.056263] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:33.324 [2024-07-16 00:22:52.056270] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:33.324 [2024-07-16 00:22:52.056274] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:33.324 [2024-07-16 00:22:52.056278] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:33.324 [2024-07-16 00:22:52.056281] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:33.324 [2024-07-16 00:22:52.056286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:33.324 [2024-07-16 00:22:52.056294] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:33.324 [2024-07-16 00:22:52.056301] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.056304] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.056307] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14ecec0) 00:21:33.324 [2024-07-16 00:22:52.056314] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:33.324 [2024-07-16 00:22:52.056331] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x156fe40, cid 0, qid 0 00:21:33.324 [2024-07-16 00:22:52.056501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.324 [2024-07-16 00:22:52.056507] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.324 [2024-07-16 00:22:52.056510] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.056514] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x156fe40) on tqpair=0x14ecec0 00:21:33.324 [2024-07-16 00:22:52.056520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.056523] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.056526] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14ecec0) 00:21:33.324 [2024-07-16 00:22:52.056532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.324 [2024-07-16 00:22:52.056537] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.056543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.056546] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x14ecec0) 00:21:33.324 [2024-07-16 00:22:52.056551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.324 [2024-07-16 00:22:52.056556] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.056559] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.056562] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x14ecec0) 00:21:33.324 [2024-07-16 00:22:52.056567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.324 [2024-07-16 00:22:52.056572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.056575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.056578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.324 [2024-07-16 00:22:52.056583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.324 [2024-07-16 00:22:52.056587] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:33.324 [2024-07-16 00:22:52.056597] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:33.324 [2024-07-16 00:22:52.056603] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.056606] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14ecec0) 00:21:33.324 [2024-07-16 00:22:52.056612] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.324 [2024-07-16 00:22:52.056624] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x156fe40, cid 0, qid 0 00:21:33.324 [2024-07-16 00:22:52.056628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x156ffc0, cid 1, qid 0 00:21:33.324 [2024-07-16 00:22:52.056632] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1570140, cid 2, qid 0 00:21:33.324 [2024-07-16 00:22:52.056636] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.324 [2024-07-16 00:22:52.056640] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1570440, cid 4, qid 0 00:21:33.324 [2024-07-16 00:22:52.056766] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.324 [2024-07-16 00:22:52.056771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.324 [2024-07-16 00:22:52.056775] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.056778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1570440) on tqpair=0x14ecec0 
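The SET FEATURES ASYNC EVENT CONFIGURATION (feature 0x0b) followed by the four ASYNC EVENT REQUEST (0c) submissions on cid 0 through 3 arms the admin queue with outstanding AERs; one of these is what later delivers notices such as a discovery log change. An application never submits these commands itself; with SPDK's public spdk/nvme.h API it only registers a callback, roughly as in this sketch:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg;
	/* cdw0 encodes the async event type/info. */
	printf("async event: cdw0=0x%08x\n", cpl->cdw0);
}

/* ctrlr: an initialized controller, e.g. returned by spdk_nvme_connect(). */
static void
arm_async_events(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
	/* The driver keeps up to the controller's AER limit outstanding,
	 * re-submitting an ASYNC EVENT REQUEST each time one completes. */
}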
00:21:33.324 [2024-07-16 00:22:52.056782] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:33.324 [2024-07-16 00:22:52.056786] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:33.324 [2024-07-16 00:22:52.056793] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:33.324 [2024-07-16 00:22:52.056798] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:33.324 [2024-07-16 00:22:52.056803] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.056807] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.324 [2024-07-16 00:22:52.056809] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14ecec0) 00:21:33.325 [2024-07-16 00:22:52.056815] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:33.325 [2024-07-16 00:22:52.056827] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1570440, cid 4, qid 0 00:21:33.325 [2024-07-16 00:22:52.056906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.325 [2024-07-16 00:22:52.056912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.325 [2024-07-16 00:22:52.056915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.325 [2024-07-16 00:22:52.056918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1570440) on tqpair=0x14ecec0 00:21:33.325 [2024-07-16 00:22:52.056969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:33.325 [2024-07-16 00:22:52.056978] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:33.325 [2024-07-16 00:22:52.056984] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.325 [2024-07-16 00:22:52.056988] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14ecec0) 00:21:33.325 [2024-07-16 00:22:52.056993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.325 [2024-07-16 00:22:52.057003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1570440, cid 4, qid 0 00:21:33.325 [2024-07-16 00:22:52.057094] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:33.325 [2024-07-16 00:22:52.057100] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:33.325 [2024-07-16 00:22:52.057103] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:33.325 [2024-07-16 00:22:52.057106] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14ecec0): datao=0, datal=4096, cccid=4 00:21:33.325 [2024-07-16 00:22:52.057110] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1570440) on tqpair(0x14ecec0): expected_datao=0, payload_size=4096 00:21:33.325 [2024-07-16 00:22:52.057114] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:21:33.325 [2024-07-16 00:22:52.057120] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:33.325 [2024-07-16 00:22:52.057123] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:33.325 [2024-07-16 00:22:52.057156] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.325 [2024-07-16 00:22:52.057162] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.325 [2024-07-16 00:22:52.057165] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.325 [2024-07-16 00:22:52.057168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1570440) on tqpair=0x14ecec0 00:21:33.325 [2024-07-16 00:22:52.057176] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:33.325 [2024-07-16 00:22:52.057185] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:33.325 [2024-07-16 00:22:52.057194] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:33.325 [2024-07-16 00:22:52.057200] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.325 [2024-07-16 00:22:52.057203] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14ecec0) 00:21:33.325 [2024-07-16 00:22:52.057209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.325 [2024-07-16 00:22:52.057219] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1570440, cid 4, qid 0 00:21:33.325 [2024-07-16 00:22:52.057340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:33.325 [2024-07-16 00:22:52.057346] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:33.325 [2024-07-16 00:22:52.057349] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:33.325 [2024-07-16 00:22:52.057352] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14ecec0): datao=0, datal=4096, cccid=4 00:21:33.325 [2024-07-16 00:22:52.057358] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1570440) on tqpair(0x14ecec0): expected_datao=0, payload_size=4096 00:21:33.325 [2024-07-16 00:22:52.057362] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.325 [2024-07-16 00:22:52.057368] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:33.325 [2024-07-16 00:22:52.057371] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:33.325 [2024-07-16 00:22:52.057448] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.325 [2024-07-16 00:22:52.057454] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.325 [2024-07-16 00:22:52.057456] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.325 [2024-07-16 00:22:52.057460] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1570440) on tqpair=0x14ecec0 00:21:33.325 [2024-07-16 00:22:52.057470] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:33.325 [2024-07-16 00:22:52.057479] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors 
(timeout 30000 ms) 00:21:33.325 [2024-07-16 00:22:52.057486] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.325 [2024-07-16 00:22:52.057489] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14ecec0) 00:21:33.325 [2024-07-16 00:22:52.057495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.325 [2024-07-16 00:22:52.057506] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1570440, cid 4, qid 0 00:21:33.325 [2024-07-16 00:22:52.057596] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:33.325 [2024-07-16 00:22:52.057602] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:33.325 [2024-07-16 00:22:52.057605] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:33.325 [2024-07-16 00:22:52.057608] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14ecec0): datao=0, datal=4096, cccid=4 00:21:33.325 [2024-07-16 00:22:52.057612] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1570440) on tqpair(0x14ecec0): expected_datao=0, payload_size=4096 00:21:33.325 [2024-07-16 00:22:52.057615] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.325 [2024-07-16 00:22:52.057621] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:33.325 [2024-07-16 00:22:52.057624] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:33.325 [2024-07-16 00:22:52.057657] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.325 [2024-07-16 00:22:52.057663] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.325 [2024-07-16 00:22:52.057666] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.325 [2024-07-16 00:22:52.057669] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1570440) on tqpair=0x14ecec0 00:21:33.325 [2024-07-16 00:22:52.057675] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:33.325 [2024-07-16 00:22:52.057682] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:33.325 [2024-07-16 00:22:52.057691] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:33.325 [2024-07-16 00:22:52.057697] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:33.325 [2024-07-16 00:22:52.057701] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:33.325 [2024-07-16 00:22:52.057705] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:33.325 [2024-07-16 00:22:52.057711] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:33.325 [2024-07-16 00:22:52.057715] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:33.325 [2024-07-16 00:22:52.057719] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:33.325 [2024-07-16 00:22:52.057732] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.325 [2024-07-16 00:22:52.057736] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14ecec0) 00:21:33.325 [2024-07-16 00:22:52.057742] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.326 [2024-07-16 00:22:52.057747] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.057750] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.057753] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14ecec0) 00:21:33.326 [2024-07-16 00:22:52.057759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.326 [2024-07-16 00:22:52.057771] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1570440, cid 4, qid 0 00:21:33.326 [2024-07-16 00:22:52.057776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15705c0, cid 5, qid 0 00:21:33.326 [2024-07-16 00:22:52.057869] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.326 [2024-07-16 00:22:52.057875] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.326 [2024-07-16 00:22:52.057878] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.057882] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1570440) on tqpair=0x14ecec0 00:21:33.326 [2024-07-16 00:22:52.057887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.326 [2024-07-16 00:22:52.057892] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.326 [2024-07-16 00:22:52.057895] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.057898] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15705c0) on tqpair=0x14ecec0 00:21:33.326 [2024-07-16 00:22:52.057907] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.057911] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14ecec0) 00:21:33.326 [2024-07-16 00:22:52.057916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.326 [2024-07-16 00:22:52.057925] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15705c0, cid 5, qid 0 00:21:33.326 [2024-07-16 00:22:52.058003] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.326 [2024-07-16 00:22:52.058009] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.326 [2024-07-16 00:22:52.058012] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.058015] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15705c0) on tqpair=0x14ecec0 00:21:33.326 [2024-07-16 00:22:52.058023] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.058026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14ecec0) 00:21:33.326 [2024-07-16 00:22:52.058032] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.326 [2024-07-16 00:22:52.058041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15705c0, cid 5, qid 0 00:21:33.326 [2024-07-16 00:22:52.058120] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.326 [2024-07-16 00:22:52.058126] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.326 [2024-07-16 00:22:52.058129] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.058134] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15705c0) on tqpair=0x14ecec0 00:21:33.326 [2024-07-16 00:22:52.058141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.058145] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14ecec0) 00:21:33.326 [2024-07-16 00:22:52.058150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.326 [2024-07-16 00:22:52.058159] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15705c0, cid 5, qid 0 00:21:33.326 [2024-07-16 00:22:52.058245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.326 [2024-07-16 00:22:52.058251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.326 [2024-07-16 00:22:52.058254] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.058257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15705c0) on tqpair=0x14ecec0 00:21:33.326 [2024-07-16 00:22:52.058269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.058273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14ecec0) 00:21:33.326 [2024-07-16 00:22:52.058279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.326 [2024-07-16 00:22:52.058285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.058288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14ecec0) 00:21:33.326 [2024-07-16 00:22:52.058293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.326 [2024-07-16 00:22:52.058299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.058302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x14ecec0) 00:21:33.326 [2024-07-16 00:22:52.058308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.326 [2024-07-16 00:22:52.058314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.058317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x14ecec0) 00:21:33.326 [2024-07-16 00:22:52.058322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.326 [2024-07-16 00:22:52.058333] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15705c0, cid 5, qid 0 00:21:33.326 [2024-07-16 00:22:52.058337] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1570440, cid 4, qid 0 00:21:33.326 [2024-07-16 00:22:52.058341] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1570740, cid 6, qid 0 00:21:33.326 [2024-07-16 00:22:52.058345] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15708c0, cid 7, qid 0 00:21:33.326 [2024-07-16 00:22:52.062236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:33.326 [2024-07-16 00:22:52.062247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:33.326 [2024-07-16 00:22:52.062250] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.062253] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14ecec0): datao=0, datal=8192, cccid=5 00:21:33.326 [2024-07-16 00:22:52.062257] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15705c0) on tqpair(0x14ecec0): expected_datao=0, payload_size=8192 00:21:33.326 [2024-07-16 00:22:52.062261] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.062267] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.062270] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.062277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:33.326 [2024-07-16 00:22:52.062282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:33.326 [2024-07-16 00:22:52.062285] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.062288] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14ecec0): datao=0, datal=512, cccid=4 00:21:33.326 [2024-07-16 00:22:52.062292] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1570440) on tqpair(0x14ecec0): expected_datao=0, payload_size=512 00:21:33.326 [2024-07-16 00:22:52.062295] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.062301] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.062304] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:33.326 [2024-07-16 00:22:52.062308] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:33.326 [2024-07-16 00:22:52.062313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:33.326 [2024-07-16 00:22:52.062316] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:33.327 [2024-07-16 00:22:52.062319] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14ecec0): datao=0, datal=512, cccid=6 00:21:33.327 [2024-07-16 00:22:52.062323] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1570740) on tqpair(0x14ecec0): expected_datao=0, payload_size=512 00:21:33.327 [2024-07-16 00:22:52.062326] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.327 [2024-07-16 00:22:52.062331] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:33.327 [2024-07-16 00:22:52.062334] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:33.327 [2024-07-16 00:22:52.062339] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:33.327 [2024-07-16 00:22:52.062344] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:33.327 [2024-07-16 00:22:52.062346] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:33.327 [2024-07-16 00:22:52.062350] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14ecec0): datao=0, datal=4096, cccid=7 00:21:33.327 [2024-07-16 00:22:52.062353] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15708c0) on tqpair(0x14ecec0): expected_datao=0, payload_size=4096 00:21:33.327 [2024-07-16 00:22:52.062357] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.327 [2024-07-16 00:22:52.062362] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:33.327 [2024-07-16 00:22:52.062365] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:33.327 [2024-07-16 00:22:52.062370] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.327 [2024-07-16 00:22:52.062374] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.327 [2024-07-16 00:22:52.062377] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.327 [2024-07-16 00:22:52.062381] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15705c0) on tqpair=0x14ecec0 00:21:33.327 [2024-07-16 00:22:52.062392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.327 [2024-07-16 00:22:52.062397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.327 [2024-07-16 00:22:52.062400] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.327 [2024-07-16 00:22:52.062403] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1570440) on tqpair=0x14ecec0 00:21:33.327 [2024-07-16 00:22:52.062411] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.327 [2024-07-16 00:22:52.062416] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.327 [2024-07-16 00:22:52.062419] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.327 [2024-07-16 00:22:52.062422] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1570740) on tqpair=0x14ecec0 00:21:33.327 [2024-07-16 00:22:52.062428] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.327 [2024-07-16 00:22:52.062432] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.327 [2024-07-16 00:22:52.062435] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.327 [2024-07-16 00:22:52.062440] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15708c0) on tqpair=0x14ecec0 00:21:33.327 ===================================================== 00:21:33.327 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:33.327 ===================================================== 00:21:33.327 Controller Capabilities/Features 00:21:33.327 ================================ 00:21:33.327 Vendor ID: 8086 00:21:33.327 Subsystem Vendor ID: 8086 00:21:33.327 Serial Number: SPDK00000000000001 00:21:33.327 Model Number: SPDK bdev Controller 00:21:33.327 Firmware Version: 24.09 00:21:33.327 Recommended Arb Burst: 6 00:21:33.327 IEEE OUI Identifier: e4 d2 5c 00:21:33.327 Multi-path I/O 00:21:33.327 May have multiple subsystem ports: Yes 00:21:33.327 May have multiple controllers: Yes 00:21:33.327 Associated with SR-IOV VF: No 00:21:33.327 Max Data 
Transfer Size: 131072 00:21:33.327 Max Number of Namespaces: 32 00:21:33.327 Max Number of I/O Queues: 127 00:21:33.327 NVMe Specification Version (VS): 1.3 00:21:33.327 NVMe Specification Version (Identify): 1.3 00:21:33.327 Maximum Queue Entries: 128 00:21:33.327 Contiguous Queues Required: Yes 00:21:33.327 Arbitration Mechanisms Supported 00:21:33.327 Weighted Round Robin: Not Supported 00:21:33.327 Vendor Specific: Not Supported 00:21:33.327 Reset Timeout: 15000 ms 00:21:33.327 Doorbell Stride: 4 bytes 00:21:33.327 NVM Subsystem Reset: Not Supported 00:21:33.327 Command Sets Supported 00:21:33.327 NVM Command Set: Supported 00:21:33.327 Boot Partition: Not Supported 00:21:33.327 Memory Page Size Minimum: 4096 bytes 00:21:33.327 Memory Page Size Maximum: 4096 bytes 00:21:33.327 Persistent Memory Region: Not Supported 00:21:33.327 Optional Asynchronous Events Supported 00:21:33.327 Namespace Attribute Notices: Supported 00:21:33.327 Firmware Activation Notices: Not Supported 00:21:33.327 ANA Change Notices: Not Supported 00:21:33.327 PLE Aggregate Log Change Notices: Not Supported 00:21:33.327 LBA Status Info Alert Notices: Not Supported 00:21:33.327 EGE Aggregate Log Change Notices: Not Supported 00:21:33.327 Normal NVM Subsystem Shutdown event: Not Supported 00:21:33.327 Zone Descriptor Change Notices: Not Supported 00:21:33.327 Discovery Log Change Notices: Not Supported 00:21:33.327 Controller Attributes 00:21:33.327 128-bit Host Identifier: Supported 00:21:33.327 Non-Operational Permissive Mode: Not Supported 00:21:33.327 NVM Sets: Not Supported 00:21:33.327 Read Recovery Levels: Not Supported 00:21:33.327 Endurance Groups: Not Supported 00:21:33.327 Predictable Latency Mode: Not Supported 00:21:33.327 Traffic Based Keep ALive: Not Supported 00:21:33.327 Namespace Granularity: Not Supported 00:21:33.327 SQ Associations: Not Supported 00:21:33.327 UUID List: Not Supported 00:21:33.327 Multi-Domain Subsystem: Not Supported 00:21:33.327 Fixed Capacity Management: Not Supported 00:21:33.327 Variable Capacity Management: Not Supported 00:21:33.327 Delete Endurance Group: Not Supported 00:21:33.327 Delete NVM Set: Not Supported 00:21:33.327 Extended LBA Formats Supported: Not Supported 00:21:33.327 Flexible Data Placement Supported: Not Supported 00:21:33.327 00:21:33.327 Controller Memory Buffer Support 00:21:33.327 ================================ 00:21:33.327 Supported: No 00:21:33.327 00:21:33.327 Persistent Memory Region Support 00:21:33.327 ================================ 00:21:33.327 Supported: No 00:21:33.327 00:21:33.327 Admin Command Set Attributes 00:21:33.327 ============================ 00:21:33.327 Security Send/Receive: Not Supported 00:21:33.327 Format NVM: Not Supported 00:21:33.328 Firmware Activate/Download: Not Supported 00:21:33.328 Namespace Management: Not Supported 00:21:33.328 Device Self-Test: Not Supported 00:21:33.328 Directives: Not Supported 00:21:33.328 NVMe-MI: Not Supported 00:21:33.328 Virtualization Management: Not Supported 00:21:33.328 Doorbell Buffer Config: Not Supported 00:21:33.328 Get LBA Status Capability: Not Supported 00:21:33.328 Command & Feature Lockdown Capability: Not Supported 00:21:33.328 Abort Command Limit: 4 00:21:33.328 Async Event Request Limit: 4 00:21:33.328 Number of Firmware Slots: N/A 00:21:33.328 Firmware Slot 1 Read-Only: N/A 00:21:33.328 Firmware Activation Without Reset: N/A 00:21:33.328 Multiple Update Detection Support: N/A 00:21:33.328 Firmware Update Granularity: No Information Provided 00:21:33.328 Per-Namespace SMART 
Log: No 00:21:33.328 Asymmetric Namespace Access Log Page: Not Supported 00:21:33.328 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:33.328 Command Effects Log Page: Supported 00:21:33.328 Get Log Page Extended Data: Supported 00:21:33.328 Telemetry Log Pages: Not Supported 00:21:33.328 Persistent Event Log Pages: Not Supported 00:21:33.328 Supported Log Pages Log Page: May Support 00:21:33.328 Commands Supported & Effects Log Page: Not Supported 00:21:33.328 Feature Identifiers & Effects Log Page:May Support 00:21:33.328 NVMe-MI Commands & Effects Log Page: May Support 00:21:33.328 Data Area 4 for Telemetry Log: Not Supported 00:21:33.328 Error Log Page Entries Supported: 128 00:21:33.328 Keep Alive: Supported 00:21:33.328 Keep Alive Granularity: 10000 ms 00:21:33.328 00:21:33.328 NVM Command Set Attributes 00:21:33.328 ========================== 00:21:33.328 Submission Queue Entry Size 00:21:33.328 Max: 64 00:21:33.328 Min: 64 00:21:33.328 Completion Queue Entry Size 00:21:33.328 Max: 16 00:21:33.328 Min: 16 00:21:33.328 Number of Namespaces: 32 00:21:33.328 Compare Command: Supported 00:21:33.328 Write Uncorrectable Command: Not Supported 00:21:33.328 Dataset Management Command: Supported 00:21:33.328 Write Zeroes Command: Supported 00:21:33.328 Set Features Save Field: Not Supported 00:21:33.328 Reservations: Supported 00:21:33.328 Timestamp: Not Supported 00:21:33.328 Copy: Supported 00:21:33.328 Volatile Write Cache: Present 00:21:33.328 Atomic Write Unit (Normal): 1 00:21:33.328 Atomic Write Unit (PFail): 1 00:21:33.328 Atomic Compare & Write Unit: 1 00:21:33.328 Fused Compare & Write: Supported 00:21:33.328 Scatter-Gather List 00:21:33.328 SGL Command Set: Supported 00:21:33.328 SGL Keyed: Supported 00:21:33.328 SGL Bit Bucket Descriptor: Not Supported 00:21:33.328 SGL Metadata Pointer: Not Supported 00:21:33.328 Oversized SGL: Not Supported 00:21:33.328 SGL Metadata Address: Not Supported 00:21:33.328 SGL Offset: Supported 00:21:33.328 Transport SGL Data Block: Not Supported 00:21:33.328 Replay Protected Memory Block: Not Supported 00:21:33.328 00:21:33.328 Firmware Slot Information 00:21:33.328 ========================= 00:21:33.328 Active slot: 1 00:21:33.328 Slot 1 Firmware Revision: 24.09 00:21:33.328 00:21:33.328 00:21:33.328 Commands Supported and Effects 00:21:33.328 ============================== 00:21:33.328 Admin Commands 00:21:33.328 -------------- 00:21:33.328 Get Log Page (02h): Supported 00:21:33.328 Identify (06h): Supported 00:21:33.328 Abort (08h): Supported 00:21:33.328 Set Features (09h): Supported 00:21:33.328 Get Features (0Ah): Supported 00:21:33.328 Asynchronous Event Request (0Ch): Supported 00:21:33.328 Keep Alive (18h): Supported 00:21:33.328 I/O Commands 00:21:33.328 ------------ 00:21:33.328 Flush (00h): Supported LBA-Change 00:21:33.328 Write (01h): Supported LBA-Change 00:21:33.328 Read (02h): Supported 00:21:33.328 Compare (05h): Supported 00:21:33.328 Write Zeroes (08h): Supported LBA-Change 00:21:33.328 Dataset Management (09h): Supported LBA-Change 00:21:33.328 Copy (19h): Supported LBA-Change 00:21:33.328 00:21:33.328 Error Log 00:21:33.328 ========= 00:21:33.328 00:21:33.328 Arbitration 00:21:33.328 =========== 00:21:33.328 Arbitration Burst: 1 00:21:33.328 00:21:33.328 Power Management 00:21:33.328 ================ 00:21:33.328 Number of Power States: 1 00:21:33.328 Current Power State: Power State #0 00:21:33.328 Power State #0: 00:21:33.328 Max Power: 0.00 W 00:21:33.328 Non-Operational State: Operational 00:21:33.328 Entry Latency: Not 
Reported 00:21:33.328 Exit Latency: Not Reported 00:21:33.328 Relative Read Throughput: 0 00:21:33.328 Relative Read Latency: 0 00:21:33.328 Relative Write Throughput: 0 00:21:33.328 Relative Write Latency: 0 00:21:33.328 Idle Power: Not Reported 00:21:33.328 Active Power: Not Reported 00:21:33.328 Non-Operational Permissive Mode: Not Supported 00:21:33.328 00:21:33.328 Health Information 00:21:33.328 ================== 00:21:33.328 Critical Warnings: 00:21:33.328 Available Spare Space: OK 00:21:33.328 Temperature: OK 00:21:33.328 Device Reliability: OK 00:21:33.328 Read Only: No 00:21:33.328 Volatile Memory Backup: OK 00:21:33.328 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:33.328 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:33.328 Available Spare: 0% 00:21:33.328 Available Spare Threshold: 0% 00:21:33.328 Life Percentage Used:[2024-07-16 00:22:52.062524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.328 [2024-07-16 00:22:52.062529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x14ecec0) 00:21:33.329 [2024-07-16 00:22:52.062535] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.329 [2024-07-16 00:22:52.062548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15708c0, cid 7, qid 0 00:21:33.329 [2024-07-16 00:22:52.062722] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.329 [2024-07-16 00:22:52.062728] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.329 [2024-07-16 00:22:52.062731] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.329 [2024-07-16 00:22:52.062734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15708c0) on tqpair=0x14ecec0 00:21:33.329 [2024-07-16 00:22:52.062763] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:33.329 [2024-07-16 00:22:52.062773] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x156fe40) on tqpair=0x14ecec0 00:21:33.329 [2024-07-16 00:22:52.062778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.329 [2024-07-16 00:22:52.062783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x156ffc0) on tqpair=0x14ecec0 00:21:33.329 [2024-07-16 00:22:52.062787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.329 [2024-07-16 00:22:52.062791] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1570140) on tqpair=0x14ecec0 00:21:33.329 [2024-07-16 00:22:52.062795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.329 [2024-07-16 00:22:52.062799] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.329 [2024-07-16 00:22:52.062803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.329 [2024-07-16 00:22:52.062810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.329 [2024-07-16 00:22:52.062813] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.329 [2024-07-16 00:22:52.062816] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.329 [2024-07-16 00:22:52.062822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.329 [2024-07-16 00:22:52.062834] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.329 [2024-07-16 00:22:52.062915] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.329 [2024-07-16 00:22:52.062921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.329 [2024-07-16 00:22:52.062924] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.329 [2024-07-16 00:22:52.062927] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.329 [2024-07-16 00:22:52.062933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.329 [2024-07-16 00:22:52.062936] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.329 [2024-07-16 00:22:52.062939] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.329 [2024-07-16 00:22:52.062945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.329 [2024-07-16 00:22:52.062957] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.329 [2024-07-16 00:22:52.063054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.329 [2024-07-16 00:22:52.063059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.329 [2024-07-16 00:22:52.063064] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.329 [2024-07-16 00:22:52.063067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.329 [2024-07-16 00:22:52.063071] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:33.329 [2024-07-16 00:22:52.063075] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:33.329 [2024-07-16 00:22:52.063083] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.329 [2024-07-16 00:22:52.063086] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.329 [2024-07-16 00:22:52.063089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.329 [2024-07-16 00:22:52.063095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.329 [2024-07-16 00:22:52.063104] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.329 [2024-07-16 00:22:52.063182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.329 [2024-07-16 00:22:52.063188] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.329 [2024-07-16 00:22:52.063191] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.329 [2024-07-16 00:22:52.063194] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.329 [2024-07-16 00:22:52.063202] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.329 [2024-07-16 00:22:52.063206] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.329 [2024-07-16 00:22:52.063209] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.329 [2024-07-16 00:22:52.063214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.329 [2024-07-16 00:22:52.063229] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.329 [2024-07-16 00:22:52.063301] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.329 [2024-07-16 00:22:52.063307] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.329 [2024-07-16 00:22:52.063310] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.329 [2024-07-16 00:22:52.063313] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.329 [2024-07-16 00:22:52.063321] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.329 [2024-07-16 00:22:52.063325] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.329 [2024-07-16 00:22:52.063328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.329 [2024-07-16 00:22:52.063333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.329 [2024-07-16 00:22:52.063343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.329 [2024-07-16 00:22:52.063429] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.329 [2024-07-16 00:22:52.063434] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.329 [2024-07-16 00:22:52.063437] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.329 [2024-07-16 00:22:52.063441] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.329 [2024-07-16 00:22:52.063449] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.329 [2024-07-16 00:22:52.063452] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.329 [2024-07-16 00:22:52.063455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.329 [2024-07-16 00:22:52.063461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.329 [2024-07-16 00:22:52.063470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.329 [2024-07-16 00:22:52.063553] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.329 [2024-07-16 00:22:52.063558] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.329 [2024-07-16 00:22:52.063561] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.329 [2024-07-16 00:22:52.063564] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.329 [2024-07-16 00:22:52.063572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.063576] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.063579] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.330 
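For orientation: everything traced from the start of this section through the controller report above is the connect-time state machine (keep-alive setup, set number of queues, identify controller/namespaces, set supported log pages and features, "setting state to ready"), and the records that follow are the matching teardown. A minimal host-side sketch of that flow, assuming SPDK's public NVMe driver API rather than the internal nvme_tcp.c/nvme_ctrlr.c functions the log names; the address, service ID, subsystem NQN, and keep-alive granularity are taken from the report above, and error handling is trimmed:

/* Sketch only: public-API equivalent of the traced connect/teardown. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr_opts opts;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Address, port, and NQN as reported above. */
	spdk_nvme_transport_id_parse(&trid,
		"trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
		"subnqn:nqn.2016-06.io.spdk:cnode1");

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	opts.keep_alive_timeout_ms = 10000; /* matches "Keep Alive Granularity: 10000 ms" */

	/* spdk_nvme_connect() drives the state machine shown in the trace. */
	ctrlr = spdk_nvme_connect(&trid, &opts, sizeof(opts));
	if (ctrlr == NULL) {
		fprintf(stderr, "connect failed\n");
		return 1;
	}

	/* The identify data summarized in the report above. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("model: %.40s fw: %.8s\n",
	       (const char *)cdata->mn, (const char *)cdata->fr);

	/* Detach triggers the destruct/shutdown sequence the log shows next. */
	spdk_nvme_detach(ctrlr);
	return 0;
}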
[2024-07-16 00:22:52.063584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.330 [2024-07-16 00:22:52.063593] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.330 [2024-07-16 00:22:52.063673] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.330 [2024-07-16 00:22:52.063679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.330 [2024-07-16 00:22:52.063682] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.063685] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.330 [2024-07-16 00:22:52.063693] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.063697] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.063700] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.330 [2024-07-16 00:22:52.063705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.330 [2024-07-16 00:22:52.063714] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.330 [2024-07-16 00:22:52.063790] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.330 [2024-07-16 00:22:52.063796] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.330 [2024-07-16 00:22:52.063798] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.063801] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.330 [2024-07-16 00:22:52.063809] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.063813] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.063816] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.330 [2024-07-16 00:22:52.063821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.330 [2024-07-16 00:22:52.063831] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.330 [2024-07-16 00:22:52.063911] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.330 [2024-07-16 00:22:52.063916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.330 [2024-07-16 00:22:52.063919] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.063922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.330 [2024-07-16 00:22:52.063930] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.063934] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.063937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.330 [2024-07-16 00:22:52.063942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.330 [2024-07-16 00:22:52.063951] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.330 [2024-07-16 00:22:52.064034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.330 [2024-07-16 00:22:52.064041] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.330 [2024-07-16 00:22:52.064044] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.064048] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.330 [2024-07-16 00:22:52.064055] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.064059] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.064062] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.330 [2024-07-16 00:22:52.064067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.330 [2024-07-16 00:22:52.064076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.330 [2024-07-16 00:22:52.064161] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.330 [2024-07-16 00:22:52.064167] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.330 [2024-07-16 00:22:52.064170] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.064173] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.330 [2024-07-16 00:22:52.064181] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.064184] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.064187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.330 [2024-07-16 00:22:52.064193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.330 [2024-07-16 00:22:52.064202] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.330 [2024-07-16 00:22:52.064288] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.330 [2024-07-16 00:22:52.064294] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.330 [2024-07-16 00:22:52.064297] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.064300] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.330 [2024-07-16 00:22:52.064308] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.064312] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.064315] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.330 [2024-07-16 00:22:52.064320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.330 [2024-07-16 00:22:52.064330] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.330 [2024-07-16 00:22:52.064406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.330 
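The near-identical "FABRIC PROPERTY GET qid:0 cid:3" records repeating from here on are the tail of the controller destruct that began at "Prepare to destruct SSD" above: the queued admin commands were completed as "ABORTED - SQ DELETION", CC was written via a Fabrics Property Set to request a shutdown (nvme_ctrlr_shutdown_set_cc_done), and the host now re-reads CSTS once per iteration until CSTS.SHST reports shutdown complete or the "shutdown timeout = 10000 ms" from the log expires. A self-contained model of that loop; the controller side is faked, and property_get_shst() / reads_until_complete are names invented here for illustration, not SPDK API:

#include <stdint.h>
#include <stdio.h>

/* CSTS.SHST values per the NVMe spec. */
enum { SHST_NORMAL = 0, SHST_OCCURRING = 1, SHST_COMPLETE = 2 };

static int reads_until_complete = 3; /* stub: pretend shutdown needs 3 polls */

/* Stub standing in for one Fabrics Property Get of CSTS. */
static uint32_t property_get_shst(void)
{
	return (--reads_until_complete > 0) ? SHST_OCCURRING : SHST_COMPLETE;
}

int main(void)
{
	/* The real driver bounds this loop in time ("shutdown timeout =
	 * 10000 ms" in the trace); model that as a bounded poll count. */
	for (int polls = 1; polls <= 10000; polls++) {
		if (property_get_shst() == SHST_COMPLETE) {
			printf("shutdown complete after %d polls\n", polls);
			return 0;
		}
	}
	fprintf(stderr, "controller did not shut down in time\n");
	return 1;
}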
[2024-07-16 00:22:52.064412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.330 [2024-07-16 00:22:52.064415] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.064418] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.330 [2024-07-16 00:22:52.064426] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.064429] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.064432] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.330 [2024-07-16 00:22:52.064438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.330 [2024-07-16 00:22:52.064447] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.330 [2024-07-16 00:22:52.064525] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.330 [2024-07-16 00:22:52.064531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.330 [2024-07-16 00:22:52.064535] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.064539] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.330 [2024-07-16 00:22:52.064546] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.330 [2024-07-16 00:22:52.064550] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.331 [2024-07-16 00:22:52.064553] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.331 [2024-07-16 00:22:52.064558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.331 [2024-07-16 00:22:52.064567] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.331 [2024-07-16 00:22:52.064646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.331 [2024-07-16 00:22:52.064651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.331 [2024-07-16 00:22:52.064654] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.331 [2024-07-16 00:22:52.064657] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.331 [2024-07-16 00:22:52.064665] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.331 [2024-07-16 00:22:52.064669] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.331 [2024-07-16 00:22:52.064672] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.331 [2024-07-16 00:22:52.064677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.331 [2024-07-16 00:22:52.064687] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.331 [2024-07-16 00:22:52.064761] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.331 [2024-07-16 00:22:52.064767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.331 [2024-07-16 00:22:52.064770] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
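Each poll iteration leaves the same short record group: a "pdu type = 5" arrival (per the NVMe/TCP spec, type 0x05 is a capsule-response PDU carrying the completion of the previous read), nvme_tcp_capsule_resp_hdr_handle parsing it, nvme_tcp_req_complete retiring tcp_req 0x15702c0 (cid 3 on admin qpair 0x14ecec0), then nvme_tcp_build_contig_request and nvme_tcp_qpair_capsule_cmd_send queuing the next read, printed as another FABRIC PROPERTY GET. The "pdu type = 7" records earlier in the section are the other inbound PDU the trace decodes, C2HData (type 0x07), which carried the 4096-byte identify payloads (datal=4096, cccid=4) summarized in the controller report.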
00:21:33.331 [2024-07-16 00:22:52.064773] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.331 [2024-07-16 00:22:52.064781] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.331 [2024-07-16 00:22:52.064785] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.331 [2024-07-16 00:22:52.064788] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.331 [2024-07-16 00:22:52.064793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.331 [2024-07-16 00:22:52.064802] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.331 [2024-07-16 00:22:52.064878] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.331 [2024-07-16 00:22:52.064884] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.331 [2024-07-16 00:22:52.064887] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.331 [2024-07-16 00:22:52.064890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.331 [2024-07-16 00:22:52.064898] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.331 [2024-07-16 00:22:52.064901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.331 [2024-07-16 00:22:52.064904] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.331 [2024-07-16 00:22:52.064910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.331 [2024-07-16 00:22:52.064919] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.331 [2024-07-16 00:22:52.064994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.331 [2024-07-16 00:22:52.064999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.331 [2024-07-16 00:22:52.065002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.331 [2024-07-16 00:22:52.065007] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.331 [2024-07-16 00:22:52.065015] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.331 [2024-07-16 00:22:52.065019] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.331 [2024-07-16 00:22:52.065022] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.331 [2024-07-16 00:22:52.065027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.331 [2024-07-16 00:22:52.065036] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.331 [2024-07-16 00:22:52.065116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.331 [2024-07-16 00:22:52.065122] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.331 [2024-07-16 00:22:52.065125] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.331 [2024-07-16 00:22:52.065128] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.331 [2024-07-16 00:22:52.065136] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.331 [2024-07-16 00:22:52.065140] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.331 [2024-07-16 00:22:52.065143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.331 [2024-07-16 00:22:52.065148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.331 [2024-07-16 00:22:52.065158] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.331 [2024-07-16 00:22:52.065294] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.331 [2024-07-16 00:22:52.065299] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.331 [2024-07-16 00:22:52.065302] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.331 [2024-07-16 00:22:52.065306] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.331 [2024-07-16 00:22:52.065315] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.331 [2024-07-16 00:22:52.065318] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.331 [2024-07-16 00:22:52.065321] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.331 [2024-07-16 00:22:52.065327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.331 [2024-07-16 00:22:52.065337] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.331 [2024-07-16 00:22:52.065416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.331 [2024-07-16 00:22:52.065422] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.331 [2024-07-16 00:22:52.065424] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.332 [2024-07-16 00:22:52.065428] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.332 [2024-07-16 00:22:52.065436] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.332 [2024-07-16 00:22:52.065439] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.332 [2024-07-16 00:22:52.065442] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.332 [2024-07-16 00:22:52.065448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.332 [2024-07-16 00:22:52.065457] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.332 [2024-07-16 00:22:52.065534] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.332 [2024-07-16 00:22:52.065539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.332 [2024-07-16 00:22:52.065542] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.332 [2024-07-16 00:22:52.065545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.332 [2024-07-16 00:22:52.065555] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.332 [2024-07-16 00:22:52.065558] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.332 [2024-07-16 00:22:52.065561] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.332 [2024-07-16 00:22:52.065567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.332 [2024-07-16 00:22:52.065576] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.332 [2024-07-16 00:22:52.065659] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.332 [2024-07-16 00:22:52.065664] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.332 [2024-07-16 00:22:52.065667] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.332 [2024-07-16 00:22:52.065670] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.332 [2024-07-16 00:22:52.065678] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.332 [2024-07-16 00:22:52.065682] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.332 [2024-07-16 00:22:52.065685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.332 [2024-07-16 00:22:52.065690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.332 [2024-07-16 00:22:52.065700] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.332 [2024-07-16 00:22:52.065780] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.332 [2024-07-16 00:22:52.065785] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.332 [2024-07-16 00:22:52.065788] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.332 [2024-07-16 00:22:52.065791] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.332 [2024-07-16 00:22:52.065799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.332 [2024-07-16 00:22:52.065803] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.332 [2024-07-16 00:22:52.065806] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.332 [2024-07-16 00:22:52.065811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.332 [2024-07-16 00:22:52.065820] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0 00:21:33.332 [2024-07-16 00:22:52.065901] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:33.332 [2024-07-16 00:22:52.065906] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:33.332 [2024-07-16 00:22:52.065909] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:33.332 [2024-07-16 00:22:52.065912] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0 00:21:33.332 [2024-07-16 00:22:52.065921] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:33.332 [2024-07-16 00:22:52.065924] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:33.332 [2024-07-16 00:22:52.065927] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0) 00:21:33.332 [2024-07-16 00:22:52.065933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.332 [2024-07-16 00:22:52.065941] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0
00:21:33.332 [2024-07-16 00:22:52.066022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:33.332 [2024-07-16 00:22:52.066027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:33.332 [2024-07-16 00:22:52.066030] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:33.332 [2024-07-16 00:22:52.066033] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0
00:21:33.332 [2024-07-16 00:22:52.066041] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:33.332 [2024-07-16 00:22:52.066046] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:33.332 [2024-07-16 00:22:52.066049] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14ecec0)
00:21:33.332 [2024-07-16 00:22:52.066055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.332 [2024-07-16 00:22:52.066064] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15702c0, cid 3, qid 0
[the nine-record nvme_tcp *DEBUG* stanza above repeats verbatim, with only the timestamps advancing, while the host re-polls the FABRIC PROPERTY GET during controller shutdown; the remaining repetitions are omitted]
00:21:33.333 [2024-07-16 00:22:52.071376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15702c0) on tqpair=0x14ecec0
00:21:33.333 [2024-07-16 00:22:52.071383] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 8 milliseconds
00:21:33.333 0%
00:21:33.333 Data Units Read: 0
00:21:33.333 Data Units Written: 0
00:21:33.333 Host Read Commands: 0
00:21:33.333 Host Write Commands: 0
00:21:33.333 Controller Busy Time: 0 minutes
00:21:33.333 Power Cycles: 0
00:21:33.333 Power On Hours: 0 hours
00:21:33.333 Unsafe Shutdowns: 0
00:21:33.333 Unrecoverable Media Errors: 0
00:21:33.333 Lifetime Error Log Entries: 0
00:21:33.333 Warning Temperature Time: 0 minutes
00:21:33.333 Critical Temperature Time: 0 minutes
00:21:33.333
00:21:33.333 Number of Queues
00:21:33.333 ================
00:21:33.333 Number of I/O Submission Queues: 127
00:21:33.333 Number of I/O Completion Queues: 127
00:21:33.333
00:21:33.333 Active Namespaces
00:21:33.333 =================
00:21:33.333 Namespace ID:1
00:21:33.333 Error Recovery Timeout: Unlimited
00:21:33.333 Command Set Identifier: NVM (00h)
00:21:33.333 Deallocate: Supported
00:21:33.333 Deallocated/Unwritten Error: Not Supported
00:21:33.333 Deallocated Read Value: Unknown
00:21:33.333 Deallocate in Write Zeroes: Not Supported
00:21:33.333 Deallocated Guard Field: 0xFFFF
00:21:33.333 Flush: Supported
00:21:33.333 Reservation: Supported
00:21:33.333 Namespace Sharing Capabilities: Multiple Controllers
00:21:33.333 Size (in LBAs): 131072 (0GiB)
00:21:33.333 Capacity (in LBAs): 131072 (0GiB)
00:21:33.333 Utilization (in LBAs): 131072 (0GiB)
00:21:33.333 NGUID: ABCDEF0123456789ABCDEF0123456789
00:21:33.333 EUI64: ABCDEF0123456789
00:21:33.333 UUID: c323e47e-67a3-4744-9318-cdbf5a43f1ae
00:21:33.333 Thin Provisioning: Not Supported
00:21:33.333 Per-NS Atomic Units: Yes
00:21:33.333 Atomic Boundary Size (Normal): 0
00:21:33.333 Atomic Boundary Size (PFail): 0
00:21:33.333 Atomic Boundary Offset: 0
00:21:33.333 Maximum Single Source Range Length: 65535
00:21:33.333 Maximum Copy Length: 65535
00:21:33.333 Maximum Source Range Count: 1
00:21:33.333 NGUID/EUI64 Never Reused: No
00:21:33.333 Namespace Write Protected: No
00:21:33.333 Number of LBA Formats: 1
00:21:33.333 Current LBA Format: LBA Format #00
00:21:33.333 LBA Format #00: Data Size: 512 Metadata Size: 0
00:21:33.333
00:21:33.333
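[Note: the controller and namespace dump above is the output of SPDK's identify example against the TCP subsystem. A standalone repeat against the same listener would look roughly like the following; the binary name follows this build's layout and the invocation is a sketch, not copied from the log:
    ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
]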
00:22:52 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:22:52 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable
00:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:22:52 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:22:52 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync
00:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e
00:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e
00:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0
00:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1586995 ']'
00:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1586995
00:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@942 -- # '[' -z 1586995 ']'
00:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # kill -0 1586995
00:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@947 -- # uname
00:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1586995
00:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # process_name=reactor_0
00:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']'
00:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1586995'
killing process with pid 1586995
00:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@961 -- # kill 1586995
00:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # wait 1586995
00:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns
00:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:36.139
00:21:36.139 real 0m9.391s
00:21:36.139 user 0m7.409s
00:21:36.139 sys 0m4.602s
00:22:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1118 -- # xtrace_disable
00:22:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:36.139 ************************************
00:21:36.139 END TEST nvmf_identify
00:21:36.139 ************************************
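[Note: stripped of the xtrace bookkeeping, the nvmftestfini teardown above reduces to a handful of commands (pid value taken from this run):
    sudo modprobe -v -r nvme-tcp       # also pulls out nvme_fabrics and nvme_keyring, per the rmmod lines
    sudo modprobe -v -r nvme-fabrics
    kill 1586995 && wait 1586995       # stop the nvmf_tgt reactor process
    sudo ip -4 addr flush cvl_0_1      # drop the initiator-side address
]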
00:22:54 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0
00:22:54 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:22:54 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']'
00:22:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable
00:22:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:36.139 ************************************
00:21:36.139 START TEST nvmf_perf
00:21:36.139 ************************************
00:22:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:21:36.139 * Looking for test storage...
00:21:36.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:22:54 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:22:54 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:54 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:54 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[paths/export.sh@2-@6 prepend the Go 1.21.1, protoc 21.7 and golangci-lint 1.54.2 tool directories to PATH and export the result; the repeated full-PATH dumps, identical apart from their leading entries, are omitted here]
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0
00:22:54 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64
00:22:54 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:22:54 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:22:54 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable
00:22:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=()
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=()
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=()
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=()
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=()
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=()
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=()
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
Found 0000:86:00.0 (0x8086 - 0x159b)
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
Found 0000:86:00.1 (0x8086 - 0x159b)
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
Found net devices under 0000:86:00.0: cvl_0_0
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
Found net devices under 0000:86:00.1: cvl_0_1
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init
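[Note: the discovery pass above matches PCI vendor:device pairs against a whitelist (0x8086:0x159b is the Intel E810 'ice' part found here) and then resolves each matching function to its kernel netdev through sysfs. A minimal standalone equivalent of that walk:
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        echo "$pci -> $(ls "$pci/net" 2>/dev/null)"   # netdev name(s) for the matching function
    done
]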
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
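[Note: both ports of the NIC sit in one chassis, so the target port is moved into a private network namespace; that keeps the NVMe/TCP traffic on the physical link instead of the kernel's local path. The resulting split can be inspected afterwards (namespace name as created above):
    ip netns list                                                     # -> cvl_0_0_ns_spdk
    ip -4 addr show dev cvl_0_1                                       # initiator side, 10.0.0.1/24
    sudo ip netns exec cvl_0_0_ns_spdk ip -4 addr show dev cvl_0_0    # target side, 10.0.0.2/24
]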
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms

--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms

--- 10.0.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:59 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@716 -- # xtrace_disable
00:22:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1590752
00:22:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1590752
00:22:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@823 -- # '[' -z 1590752 ']'
00:22:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@828 -- # local max_retries=100
00:22:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # xtrace_disable
00:22:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
[2024-07-16 00:22:59.972531] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization...
[2024-07-16 00:22:59.972575] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-07-16 00:23:00.031070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-07-16 00:23:00.118455] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-07-16 00:23:00.118489] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-07-16 00:23:00.118496] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-07-16 00:23:00.118502] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-07-16 00:23:00.118507] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-07-16 00:23:00.118546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
[2024-07-16 00:23:00.118641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
[2024-07-16 00:23:00.118750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
[2024-07-16 00:23:00.118751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:23:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # return 0
00:23:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable
00:23:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:23:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:00 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:23:00 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:23:03 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:23:03 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:23:04 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0
00:23:04 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:23:04 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:23:04 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']'
00:23:04 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:23:04 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:23:04 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
[2024-07-16 00:23:04.415706] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:04 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:23:04 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:23:04 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:04 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:23:05 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:23:05 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-07-16 00:23:05.170495] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:05 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:23:05 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']'
00:23:05 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:23:05 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:23:05 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:21:48.003 Initializing NVMe Controllers
00:21:48.003 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:21:48.003 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:21:48.003 Initialization complete. Launching workers.
00:21:48.003 ========================================================
00:21:48.003                                          Latency(us)
00:21:48.003 Device Information                     :      IOPS     MiB/s   Average       min       max
00:21:48.003 PCIE (0000:5e:00.0) NSID 1 from core 0:  97786.26    381.98    326.81     10.71   7190.53
00:21:48.003 ========================================================
00:21:48.003 Total                                  :  97786.26    381.98    326.81     10.71   7190.53
00:21:48.003
00:21:48.003
00:23:06 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:49.382 Initializing NVMe Controllers
00:21:49.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:49.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:49.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:49.382 Initialization complete. Launching workers.
00:21:49.382 ========================================================
00:21:49.382                                                                          Latency(us)
00:21:49.382 Device Information                                                       :    IOPS   MiB/s    Average       min       max
00:21:49.382 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   61.78    0.24   16754.10    173.65  45623.88
00:21:49.382 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   58.79    0.23   17549.53   6982.65  55869.53
00:21:49.382 ========================================================
00:21:49.382 Total                                                                    :  120.57    0.47   17141.95    173.65  55869.53
00:21:49.382
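[Note: these runs all use the same tool; as exercised here, spdk_nvme_perf's flags read: -q queue depth, -o I/O size in bytes, -w workload pattern, -M read percentage of the mix, -t run time in seconds, -r target transport ID, and -i a shared-memory ID for co-existing with the local target. The -HI pair in the next run appears to request TCP header and data digests (a best-effort reading, not confirmed by the log). A representative invocation:
    ./build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
]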
00:23:07 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:50.317 Initializing NVMe Controllers
00:21:50.317 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:50.317 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:50.317 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:50.317 Initialization complete. Launching workers.
00:21:50.317 ========================================================
00:21:50.317                                                                          Latency(us)
00:21:50.317 Device Information                                                       :      IOPS   MiB/s    Average       min       max
00:21:50.317 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  10623.99   41.50    3021.81    330.12   8056.07
00:21:50.317 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   3840.00   15.00    8371.53   5432.69  17840.60
00:21:50.317 ========================================================
00:21:50.317 Total                                                                    :  14463.99   56.50    4442.09    330.12  17840.60
00:21:50.317
00:23:09 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:23:09 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:23:09 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:53.603 Initializing NVMe Controllers
00:21:53.603 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:53.603 Controller IO queue size 128, less than required.
00:21:53.603 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:53.603 Controller IO queue size 128, less than required.
00:21:53.603 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:53.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:53.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:53.603 Initialization complete. Launching workers.
00:21:53.603 ========================================================
00:21:53.603                                                                          Latency(us)
00:21:53.603 Device Information                                                       :      IOPS    MiB/s     Average        min        max
00:21:53.603 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   1179.46   294.86   113139.44   62860.03  198099.39
00:21:53.603 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    596.98   149.24   227096.46  101789.41  319620.48
00:21:53.603 ========================================================
00:21:53.603 Total                                                                    :   1776.44   444.11   151435.16   62860.03  319620.48
00:21:53.603
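[Note: the repeated "Controller IO queue size 128, less than required" notice means the controller's advertised submission queue cannot absorb the requested depth, so excess requests wait inside the driver, exactly as the message warns. The 256 KiB run also keeps a lot of data in flight at full depth:
    echo $(( 128 * 262144 / 1024 / 1024 ))   # -q 128 x -o 262144 = 32 MiB outstanding per namespace
]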
00:23:11 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
No valid NVMe controllers or AIO or URING devices found
Initializing NVMe Controllers
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
Controller IO queue size 128, less than required.
Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
Controller IO queue size 128, less than required.
Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
WARNING: Some requested NVMe devices were skipped
00:23:11 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:21:56.133 Initializing NVMe Controllers
00:21:56.133 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:56.133 Controller IO queue size 128, less than required.
00:21:56.133 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:56.133 Controller IO queue size 128, less than required.
00:21:56.133 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:56.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:56.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:56.133 Initialization complete. Launching workers.
00:21:56.133
00:21:56.133 ====================
00:21:56.133 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:21:56.133 TCP transport:
00:21:56.133   polls: 46498
00:21:56.133   idle_polls: 16360
00:21:56.133   sock_completions: 30138
00:21:56.133   nvme_completions: 4271
00:21:56.133   submitted_requests: 6450
00:21:56.133   queued_requests: 1
00:21:56.133
00:21:56.133 ====================
00:21:56.133 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:21:56.133 TCP transport:
00:21:56.133   polls: 42339
00:21:56.133   idle_polls: 13221
00:21:56.133   sock_completions: 29118
00:21:56.133   nvme_completions: 4361
00:21:56.133   submitted_requests: 6508
00:21:56.133   queued_requests: 1
00:21:56.133 ========================================================
00:21:56.133                                                                          Latency(us)
00:21:56.133 Device Information                                                       :      IOPS    MiB/s     Average        min        max
00:21:56.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   1066.83   266.71   125425.74   43340.32  210425.33
00:21:56.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   1089.32   272.33   119129.52   55307.31  161165.98
00:21:56.133 ========================================================
00:21:56.133 Total                                                                    :   2156.14   539.04   122244.80   43340.32  210425.33
00:21:56.133
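[Note: the --transport-stat counters above reduce to a simple poll-efficiency figure; plugging in the NSID 1 numbers from this run:
    awk 'BEGIN { polls = 46498; idle = 16360; comp = 4271; busy = polls - idle
                 printf "busy polls: %d (%.1f%%), nvme completions per busy poll: %.2f\n",
                        busy, 100 * busy / polls, comp / busy }'
    # -> busy polls: 30138 (64.8%), nvme completions per busy poll: 0.14
]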
00:23:14 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync
00:23:14 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:14 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:23:14 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:23:14 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:23:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:23:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:23:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:23:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e
00:23:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0
00:23:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1590752 ']'
00:23:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1590752
00:23:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@942 -- # '[' -z 1590752 ']'
00:23:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # kill -0 1590752
00:23:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@947 -- # uname
00:23:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:23:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1590752
00:23:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # process_name=reactor_0
00:23:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']'
00:23:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1590752'
killing process with pid 1590752
00:23:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@961 -- # kill 1590752
00:23:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # wait 1590752
00:23:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:59.413
00:21:59.413 real 0m23.701s
00:21:59.413 user 1m4.407s
00:21:59.413 sys 0m6.872s
00:23:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1118 -- # xtrace_disable
00:23:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:21:59.413 ************************************
00:21:59.413 END TEST nvmf_perf
00:21:59.413 ************************************
00:23:18 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0
00:23:18 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:23:18 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']'
00:23:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable
00:23:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:59.675 ************************************
00:21:59.675 START TEST nvmf_fio_host
00:21:59.675 ************************************
00:23:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
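[Note: the fio host test starting here typically drives fio through SPDK's bdev fio plugin rather than the kernel initiator. A minimal job sketch, assuming the plugin has been built; the LD_PRELOAD path, JSON config name and bdev name below are illustrative, not taken from this log:
    LD_PRELOAD=./build/fio/spdk_bdev fio --name=job0 --ioengine=spdk_bdev \
        --spdk_json_conf=./bdev.json --filename=Nvme0n1 \
        --thread=1 --rw=randrw --bs=4k --time_based --runtime=10
]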
00:21:59.675 * Looking for test storage...
00:21:59.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:23:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:23:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[paths/export.sh@2-@6 prepend the Go 1.21.1, protoc 21.7 and golangci-lint 1.54.2 tool directories to PATH and export the result; the repeated full-PATH dumps are omitted here]
00:23:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:23:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[paths/export.sh@2-@6 run again on the second sourcing and extend PATH the same way; the full-PATH dumps are omitted here]
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0
00:23:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:23:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:23:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable
00:23:18 nvmf_tcp.nvmf_fio_host --
common/autotest_common.sh@10 -- # set +x 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:04.988 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:04.988 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:04.988 Found net devices under 0000:86:00.0: cvl_0_0 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:04.988 Found net devices under 0000:86:00.1: cvl_0_1 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
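The device scan traced above reduces to a small allowlist match: the script builds per-family arrays of vendor:device IDs, tags both 0x159b functions as E810 (ice) parts, then globs /sys/bus/pci/devices/<bdf>/net/ to find the cvl_0_0 and cvl_0_1 netdevs and flips is_hw=yes once usable NICs turn up. A minimal bash sketch of that logic, assuming only the IDs shown in this trace; classify() is a made-up helper for illustration, not a function from nvmf/common.sh:

    #!/usr/bin/env bash
    # Hedged reconstruction of the allowlist match traced above; only the
    # vendor/device IDs that actually appear in this log are included.
    intel=0x8086 mellanox=0x15b3
    classify() {                        # hypothetical helper, not in common.sh
        local vendor=$1 device=$2
        case "$vendor:$device" in
            "$intel:0x1592"|"$intel:0x159b") echo e810 ;;   # ice driver
            "$intel:0x37d2")                 echo x722 ;;
            "$mellanox:"*)                   echo mlx  ;;
            *)                               echo unknown ;;
        esac
    }
    classify 0x8086 0x159b    # -> e810, matching "Found 0000:86:00.0 (0x8086 - 0x159b)"
    ls /sys/bus/pci/devices/0000:86:00.0/net/   # -> cvl_0_0, the netdev found above

The nvmf_tcp_init block that follows then splits those two ports across a network namespace so the target and the initiator take a real TCP hop. Condensed to just the commands that appear in the next trace lines:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # reachability check, output below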
00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:04.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:04.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:22:04.988 00:22:04.988 --- 10.0.0.2 ping statistics --- 00:22:04.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.988 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:04.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:04.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:22:04.988 00:22:04.988 --- 10.0.0.1 ping statistics --- 00:22:04.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.988 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1596841 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1596841 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@823 -- # '[' -z 1596841 ']' 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@828 -- # local max_retries=100 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # xtrace_disable 00:22:04.988 00:23:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.988 [2024-07-16 00:23:23.796737] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:22:04.988 [2024-07-16 00:23:23.796788] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.246 [2024-07-16 00:23:23.856861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:05.246 [2024-07-16 00:23:23.931203] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:05.246 [2024-07-16 00:23:23.931250] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.246 [2024-07-16 00:23:23.931257] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.246 [2024-07-16 00:23:23.931263] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.246 [2024-07-16 00:23:23.931268] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.246 [2024-07-16 00:23:23.931327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.246 [2024-07-16 00:23:23.931423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.246 [2024-07-16 00:23:23.931514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.246 [2024-07-16 00:23:23.931516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.814 00:23:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:22:05.814 00:23:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # return 0 00:22:05.814 00:23:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:06.072 [2024-07-16 00:23:24.761558] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.072 00:23:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:06.072 00:23:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:06.072 00:23:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.072 00:23:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:06.329 Malloc1 00:22:06.329 00:23:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:06.586 00:23:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:06.587 00:23:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:06.844 [2024-07-16 00:23:25.531754] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.845 00:23:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local sanitizers 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # shift 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local asan_lib= 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # grep libasan 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # asan_lib= 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # asan_lib= 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:07.103 00:23:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:07.369 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:07.369 fio-3.35 00:22:07.369 Starting 1 thread 00:22:09.900 00:22:09.900 test: (groupid=0, jobs=1): err= 0: pid=1597358: Tue Jul 16 00:23:28 2024 00:22:09.900 read: IOPS=11.7k, BW=45.9MiB/s (48.1MB/s)(91.9MiB/2005msec) 00:22:09.900 slat (nsec): min=1595, max=257155, avg=1750.12, stdev=2311.70 00:22:09.900 clat (usec): min=2926, max=10532, avg=6049.40, stdev=420.60 00:22:09.900 lat (usec): min=2959, max=10534, avg=6051.15, stdev=420.50 00:22:09.900 clat percentiles (usec): 00:22:09.900 | 1.00th=[ 5014], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:22:09.900 | 30.00th=[ 5866], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6128], 00:22:09.900 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6587], 95.00th=[ 6718], 00:22:09.900 | 99.00th=[ 6980], 99.50th=[ 7111], 99.90th=[ 8160], 99.95th=[ 9241], 00:22:09.900 | 99.99th=[ 9896] 00:22:09.900 bw ( KiB/s): min=46048, max=47480, per=99.96%, avg=46944.00, stdev=640.77, 
samples=4 00:22:09.900 iops : min=11512, max=11870, avg=11736.00, stdev=160.19, samples=4 00:22:09.900 write: IOPS=11.7k, BW=45.6MiB/s (47.8MB/s)(91.3MiB/2005msec); 0 zone resets 00:22:09.900 slat (nsec): min=1657, max=257810, avg=1849.34, stdev=1845.21 00:22:09.900 clat (usec): min=2534, max=9238, avg=4842.32, stdev=362.15 00:22:09.900 lat (usec): min=2550, max=9243, avg=4844.17, stdev=362.11 00:22:09.900 clat percentiles (usec): 00:22:09.900 | 1.00th=[ 4015], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4555], 00:22:09.900 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 00:22:09.900 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:22:09.900 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 7767], 99.95th=[ 8455], 00:22:09.900 | 99.99th=[ 9241] 00:22:09.900 bw ( KiB/s): min=46208, max=47104, per=99.99%, avg=46650.00, stdev=365.99, samples=4 00:22:09.900 iops : min=11552, max=11776, avg=11662.50, stdev=91.50, samples=4 00:22:09.900 lat (msec) : 4=0.51%, 10=99.49%, 20=0.01% 00:22:09.900 cpu : usr=69.11%, sys=27.20%, ctx=110, majf=0, minf=6 00:22:09.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:09.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:09.900 issued rwts: total=23539,23385,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.900 00:22:09.900 Run status group 0 (all jobs): 00:22:09.900 READ: bw=45.9MiB/s (48.1MB/s), 45.9MiB/s-45.9MiB/s (48.1MB/s-48.1MB/s), io=91.9MiB (96.4MB), run=2005-2005msec 00:22:09.900 WRITE: bw=45.6MiB/s (47.8MB/s), 45.6MiB/s-45.6MiB/s (47.8MB/s-47.8MB/s), io=91.3MiB (95.8MB), run=2005-2005msec 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local sanitizers 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # shift 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local asan_lib= 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # grep libasan 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:22:09.900 00:23:28 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # asan_lib= 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # asan_lib= 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:09.900 00:23:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:09.900 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:09.900 fio-3.35 00:22:09.900 Starting 1 thread 00:22:12.433 00:22:12.433 test: (groupid=0, jobs=1): err= 0: pid=1597916: Tue Jul 16 00:23:31 2024 00:22:12.433 read: IOPS=10.4k, BW=163MiB/s (171MB/s)(327MiB/2007msec) 00:22:12.433 slat (nsec): min=2625, max=87143, avg=2878.79, stdev=1300.48 00:22:12.434 clat (usec): min=2358, max=14979, avg=7307.85, stdev=1770.21 00:22:12.434 lat (usec): min=2360, max=14982, avg=7310.73, stdev=1770.35 00:22:12.434 clat percentiles (usec): 00:22:12.434 | 1.00th=[ 3851], 5.00th=[ 4621], 10.00th=[ 5080], 20.00th=[ 5735], 00:22:12.434 | 30.00th=[ 6325], 40.00th=[ 6783], 50.00th=[ 7308], 60.00th=[ 7701], 00:22:12.434 | 70.00th=[ 8094], 80.00th=[ 8717], 90.00th=[ 9503], 95.00th=[10290], 00:22:12.434 | 99.00th=[12256], 99.50th=[12780], 99.90th=[13829], 99.95th=[14091], 00:22:12.434 | 99.99th=[14353] 00:22:12.434 bw ( KiB/s): min=76928, max=91584, per=50.00%, avg=83496.00, stdev=6163.05, samples=4 00:22:12.434 iops : min= 4808, max= 5724, avg=5218.50, stdev=385.19, samples=4 00:22:12.434 write: IOPS=6136, BW=95.9MiB/s (101MB/s)(170MiB/1777msec); 0 zone resets 00:22:12.434 slat (usec): min=30, max=387, avg=32.37, stdev= 7.84 00:22:12.434 clat (usec): min=3336, max=14120, avg=8664.49, stdev=1468.00 00:22:12.434 lat (usec): min=3367, max=14153, avg=8696.86, stdev=1469.43 00:22:12.434 clat percentiles (usec): 00:22:12.434 | 1.00th=[ 5735], 5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 7439], 00:22:12.434 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[ 8848], 00:22:12.434 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11338], 00:22:12.434 | 99.00th=[12649], 99.50th=[12911], 99.90th=[13698], 99.95th=[13698], 00:22:12.434 | 99.99th=[14091] 00:22:12.434 bw ( KiB/s): min=80896, max=94112, per=88.59%, avg=86984.00, stdev=5533.40, samples=4 00:22:12.434 iops : min= 5056, max= 5882, avg=5436.50, stdev=345.84, samples=4 00:22:12.434 lat (msec) : 4=0.91%, 10=88.87%, 20=10.22% 00:22:12.434 cpu : usr=84.50%, sys=13.46%, ctx=71, majf=0, minf=3 00:22:12.434 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:22:12.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.434 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:12.434 issued rwts: total=20946,10905,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.434 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:12.434 00:22:12.434 Run status group 0 (all jobs): 00:22:12.434 READ: bw=163MiB/s (171MB/s), 163MiB/s-163MiB/s (171MB/s-171MB/s), io=327MiB (343MB), run=2007-2007msec 00:22:12.434 WRITE: bw=95.9MiB/s (101MB/s), 95.9MiB/s-95.9MiB/s (101MB/s-101MB/s), io=170MiB (179MB), run=1777-1777msec 00:22:12.434 00:23:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:12.693 rmmod nvme_tcp 00:22:12.693 rmmod nvme_fabrics 00:22:12.693 rmmod nvme_keyring 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1596841 ']' 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1596841 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@942 -- # '[' -z 1596841 ']' 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # kill -0 1596841 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@947 -- # uname 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1596841 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1596841' 00:22:12.693 killing process with pid 1596841 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@961 -- # kill 1596841 00:22:12.693 00:23:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # wait 1596841 00:22:12.952 00:23:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:12.952 00:23:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:12.952 00:23:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:12.952 00:23:31 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:12.952 00:23:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:12.952 00:23:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.952 00:23:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.952 00:23:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.484 00:23:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:15.484 00:22:15.484 real 0m15.433s 00:22:15.484 user 0m47.966s 00:22:15.484 sys 0m5.917s 00:22:15.484 00:23:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1118 -- # xtrace_disable 00:22:15.484 00:23:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.484 ************************************ 00:22:15.484 END TEST nvmf_fio_host 00:22:15.484 ************************************ 00:22:15.484 00:23:33 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:22:15.484 00:23:33 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:15.484 00:23:33 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:22:15.484 00:23:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:22:15.484 00:23:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:15.484 ************************************ 00:22:15.484 START TEST nvmf_failover 00:22:15.484 ************************************ 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:15.484 * Looking for test storage... 
00:22:15.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:15.484 00:23:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.673 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:19.674 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:19.674 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:19.674 Found net devices under 0000:86:00.0: cvl_0_0 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:19.674 Found net devices under 0000:86:00.1: cvl_0_1 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.674 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:19.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:19.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:22:19.934 00:22:19.934 --- 10.0.0.2 ping statistics --- 00:22:19.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.934 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:19.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:22:19.934 00:22:19.934 --- 10.0.0.1 ping statistics --- 00:22:19.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.934 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1601622 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1601622 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@823 -- # '[' -z 1601622 ']' 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # local max_retries=100 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # xtrace_disable 00:22:19.934 00:23:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:20.194 [2024-07-16 00:23:38.790440] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:22:20.194 [2024-07-16 00:23:38.790484] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.194 [2024-07-16 00:23:38.849724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:20.194 [2024-07-16 00:23:38.930924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.194 [2024-07-16 00:23:38.930960] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:20.194 [2024-07-16 00:23:38.930967] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.194 [2024-07-16 00:23:38.930975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:20.194 [2024-07-16 00:23:38.930982] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:20.194 [2024-07-16 00:23:38.931077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:20.194 [2024-07-16 00:23:38.931161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:20.194 [2024-07-16 00:23:38.931162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.763 00:23:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:22:20.763 00:23:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # return 0 00:22:20.763 00:23:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:20.763 00:23:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:20.763 00:23:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:21.022 00:23:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.022 00:23:39 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:21.022 [2024-07-16 00:23:39.791976] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.022 00:23:39 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:21.281 Malloc0 00:22:21.281 00:23:40 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:21.540 00:23:40 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:21.799 00:23:40 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:21.799 [2024-07-16 00:23:40.582196] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.799 00:23:40 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:22.057 [2024-07-16 00:23:40.754674] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:22.058 00:23:40 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:22.317 [2024-07-16 00:23:40.927212] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:22.317 00:23:40 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:22.317 00:23:40 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1602021 00:22:22.317 00:23:40 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:22.317 00:23:40 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1602021 /var/tmp/bdevperf.sock 00:22:22.317 00:23:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@823 -- # '[' -z 1602021 ']' 00:22:22.317 00:23:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.317 00:23:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # local max_retries=100 00:22:22.317 00:23:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:22.317 00:23:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # xtrace_disable 00:22:22.317 00:23:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:23.284 00:23:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:22:23.284 00:23:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # return 0 00:22:23.284 00:23:41 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:23.284 NVMe0n1 00:22:23.284 00:23:42 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:23.542 00:22:23.542 00:23:42 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:23.542 00:23:42 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1602265 00:22:23.542 00:23:42 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:24.919 00:23:43 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:24.919 [2024-07-16 00:23:43.559336] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c7080 is same with the state(5) to be set 00:22:24.919 [2024-07-16 00:23:43.559392] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c7080 is same with the state(5) to be set 00:22:24.919 
[2024-07-16 00:23:43.559400] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c7080 is same with the state(5) to be set 00:22:24.919
[... the same record repeats a further 11 times for tqpair=0x15c7080 (00:23:43.559405 through .559463); elided ...] 00:22:24.920
00:23:43 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:28.206
00:23:46 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:28.206 00:22:28.206
00:23:46 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:28.463
[2024-07-16 00:23:47.070850] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c8460 is same with the state(5) to be set 00:22:28.463
[... the same record repeats a further 5 times for tqpair=0x15c8460 (00:23:47.070889 through .070918); elided ...] 00:22:28.464
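Stripped of the tracing, the target provisioning and the initiator attach exercised by host/failover.sh come down to the RPC sequence below (a sketch assembled from the commands traced above; $RPC abbreviates the full rpc.py path used in this workspace):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Target side: TCP transport, a 64 MB malloc-backed bdev with 512-byte
# blocks, one subsystem, and listeners on three ports of 10.0.0.2.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done

# Initiator side: bdevperf runs in RPC-server mode (-z -r /var/tmp/bdevperf.sock),
# and the same controller is attached through two ports so the I/O has a
# second path to fail over to.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

I/O is then driven through examples/bdev/bdevperf/bdevperf.py perform_tests against the 128-deep, 4096-byte, 15-second verify workload that the bdevperf command line above specifies.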
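The recv-state errors bracketing this part of the log are the expected side effect of the test's listener toggling: each nvmf_subsystem_remove_listener tears down the active qpair (hence the ABORTED - SQ DELETION completions in try.txt further below) and the initiator fails over to a surviving port. Continuing the sketch above, the full toggle sequence looks like this (the last three steps are traced just below):

# Drop the first path; I/O fails over from 4420 to 4421.
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3

# Attach a third path, then drop the second; I/O fails over to 4422.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3

# Restore 4420 and drop 4422, forcing a final failover before the
# 15-second run ends.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422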
00:23:47 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:31.746
00:23:50 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.746
[2024-07-16 00:23:50.270532] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.746
00:23:50 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:32.683
00:23:51 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:32.683
[2024-07-16 00:23:51.470165] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1781ac0 is same with the state(5) to be set
[... the same record repeats several dozen more times for tqpair=0x1781ac0 (00:23:51.470210 through .470611); elided ...] 00:22:32.684
00:23:51 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1602265 00:22:39.276
0 00:22:39.276
00:23:57 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1602021 00:22:39.276
00:23:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@942 -- # '[' -z 1602021 ']' 00:22:39.276
00:23:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # kill -0 1602021 00:22:39.276
00:23:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # uname 00:22:39.276
00:23:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:22:39.276
00:23:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1602021 00:22:39.276
00:23:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:22:39.276
00:23:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:22:39.276
00:23:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1602021'
killing process with pid 1602021 00:22:39.276
00:23:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@961 -- # kill 1602021 00:22:39.276
00:23:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # wait 1602021 00:22:39.276
00:23:57 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:39.276
[2024-07-16 00:23:40.998264] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:22:39.276
[2024-07-16 00:23:40.998313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602021 ] 00:22:39.276
[2024-07-16 00:23:41.052386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.276
[2024-07-16 00:23:41.128233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.276
Running I/O for 15 seconds...
00:22:39.276 [2024-07-16 00:23:43.559873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.276
[2024-07-16 00:23:43.559909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.276
[... on the order of 120 further nvme_io_qpair_print_command / spdk_nvme_print_completion record pairs elided: READ and WRITE commands (sqid:1, nsid:1, len:8, lba 95816 through 96776, cids varying), each completed with ABORTED - SQ DELETION (00/08), consistent with the qpair teardown during listener removal; timestamps 00:23:43.559926 through .561703 ...]
[2024-07-16 00:23:43.561711] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:39.279 [2024-07-16 00:23:43.561718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:39.279 [2024-07-16 00:23:43.561726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:39.279 [2024-07-16 00:23:43.561732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:39.279 [2024-07-16 00:23:43.561740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:39.279 [2024-07-16 00:23:43.561747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:39.279 [2024-07-16 00:23:43.561755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:39.279 [2024-07-16 00:23:43.561763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:39.279 [2024-07-16 00:23:43.561771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:39.279 [2024-07-16 00:23:43.561778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:39.279 [2024-07-16 00:23:43.561785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:39.279 [2024-07-16 00:23:43.561792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:39.279 [2024-07-16 00:23:43.561810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:22:39.279 [2024-07-16 00:23:43.561816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:22:39.279 [2024-07-16 00:23:43.561822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96832 len:8 PRP1 0x0 PRP2 0x0 
00:22:39.279 [2024-07-16 00:23:43.561830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:39.279 [2024-07-16 00:23:43.561871] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfcf300 was disconnected and freed. reset controller. 
00:22:39.279 [2024-07-16 00:23:43.561880] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:22:39.279 [2024-07-16 00:23:43.561900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:39.279 [2024-07-16 00:23:43.561907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:39.279 [2024-07-16 00:23:43.561914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:39.279 [2024-07-16 00:23:43.561921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:39.279 [2024-07-16 00:23:43.561929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:39.279 [2024-07-16 00:23:43.561936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:39.279 [2024-07-16 00:23:43.561943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:39.279 [2024-07-16 00:23:43.561949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:39.279 [2024-07-16 00:23:43.561956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:39.279 [2024-07-16 00:23:43.561982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1540 (9): Bad file descriptor 
00:22:39.279 [2024-07-16 00:23:43.564830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:22:39.279 [2024-07-16 00:23:43.724228] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:39.279 [2024-07-16 00:23:47.072364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:55192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.279 [2024-07-16 00:23:47.072402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:55232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:55240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:55264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072561] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:55272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:55288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:55304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:55336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:55344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:55360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:55384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.280 [2024-07-16 00:23:47.072823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.280 [2024-07-16 00:23:47.072838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.280 [2024-07-16 00:23:47.072852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:6 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.280 [2024-07-16 00:23:47.072867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.280 [2024-07-16 00:23:47.072881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.280 [2024-07-16 00:23:47.072895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.280 [2024-07-16 00:23:47.072910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.280 [2024-07-16 00:23:47.072925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.280 [2024-07-16 00:23:47.072939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.280 [2024-07-16 00:23:47.072947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.072954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.072962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.072968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.072976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.072982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.072990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.072997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55512 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 
00:23:47.073157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:55760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:55768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.281 [2024-07-16 00:23:47.073564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.281 [2024-07-16 00:23:47.073571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.073579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.282 [2024-07-16 00:23:47.073585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.073593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.282 [2024-07-16 00:23:47.073599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.073608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.282 [2024-07-16 00:23:47.073614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.073623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.282 [2024-07-16 00:23:47.073629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.073650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.073657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55856 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.073663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.073672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.073677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.073683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55864 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.073689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.073696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.073701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.073707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55872 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.073713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.073720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.073725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.073731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55880 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.073737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.073744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.073749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.073754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55888 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.073763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.073770] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.073775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.073780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55896 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.073788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.073794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.073799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.073805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55904 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.073811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.073818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.073825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.073830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55912 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.073836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.073843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.073848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.073853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55920 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.073860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.073867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.073872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.073877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55928 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.073884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.073890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.073895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.073901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55936 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.073907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.073913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.073919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.073924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55944 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.073930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.073937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.073942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.073949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55952 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.073955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.073962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.073968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.073973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55960 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.073979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.073987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.073992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.073997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55968 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.074003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.074010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.074016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.074022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55976 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.074028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.074035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.074040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.074045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55984 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.074051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.074058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 
00:23:47.074063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.074068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55992 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.074075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.074081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.074086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.074091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56000 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.074098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.074104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.074109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.074115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56008 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.074121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.074132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.074137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.074143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56016 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.074149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.074156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.074161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.074166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56024 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.074172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.282 [2024-07-16 00:23:47.074179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.282 [2024-07-16 00:23:47.074184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.282 [2024-07-16 00:23:47.074190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56032 len:8 PRP1 0x0 PRP2 0x0 00:22:39.282 [2024-07-16 00:23:47.074196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.283 [2024-07-16 00:23:47.074202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.283 [2024-07-16 00:23:47.074208] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.283 [2024-07-16 00:23:47.074214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56040 len:8 PRP1 0x0 PRP2 0x0 00:22:39.283 [2024-07-16 00:23:47.074221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.283 [2024-07-16 00:23:47.074233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.283 [2024-07-16 00:23:47.074238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.283 [2024-07-16 00:23:47.074244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56048 len:8 PRP1 0x0 PRP2 0x0 00:22:39.283 [2024-07-16 00:23:47.074250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.283 [2024-07-16 00:23:47.074257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.283 [2024-07-16 00:23:47.074262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.283 [2024-07-16 00:23:47.074267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56056 len:8 PRP1 0x0 PRP2 0x0 00:22:39.283 [2024-07-16 00:23:47.074273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.283 [2024-07-16 00:23:47.074280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.283 [2024-07-16 00:23:47.074285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.283 [2024-07-16 00:23:47.074291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56064 len:8 PRP1 0x0 PRP2 0x0 00:22:39.283 [2024-07-16 00:23:47.074297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.283 [2024-07-16 00:23:47.074304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.283 [2024-07-16 00:23:47.074309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.283 [2024-07-16 00:23:47.074314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56072 len:8 PRP1 0x0 PRP2 0x0 00:22:39.283 [2024-07-16 00:23:47.074322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.283 [2024-07-16 00:23:47.074328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.283 [2024-07-16 00:23:47.074333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.283 [2024-07-16 00:23:47.074338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56080 len:8 PRP1 0x0 PRP2 0x0 00:22:39.283 [2024-07-16 00:23:47.074345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.283 [2024-07-16 00:23:47.074352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.283 [2024-07-16 00:23:47.074358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:39.283 [2024-07-16 00:23:47.074363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56088 len:8 PRP1 0x0 PRP2 0x0 00:22:39.283 [2024-07-16 00:23:47.074369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.283 [2024-07-16 00:23:47.074376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.283 [2024-07-16 00:23:47.074381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.283 [2024-07-16 00:23:47.074386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56096 len:8 PRP1 0x0 PRP2 0x0 00:22:39.283 [2024-07-16 00:23:47.074393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.283 [2024-07-16 00:23:47.074400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.283 [2024-07-16 00:23:47.074407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.283 [2024-07-16 00:23:47.074413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56104 len:8 PRP1 0x0 PRP2 0x0 00:22:39.283 [2024-07-16 00:23:47.074419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.283 [2024-07-16 00:23:47.074426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.283 [2024-07-16 00:23:47.074431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.283 [2024-07-16 00:23:47.074436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56112 len:8 PRP1 0x0 PRP2 0x0 00:22:39.283 [2024-07-16 00:23:47.074442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.283 [2024-07-16 00:23:47.074449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.283 [2024-07-16 00:23:47.074454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.283 [2024-07-16 00:23:47.074459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56120 len:8 PRP1 0x0 PRP2 0x0 00:22:39.283 [2024-07-16 00:23:47.074466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.283 [2024-07-16 00:23:47.074472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.283 [2024-07-16 00:23:47.074477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.283 [2024-07-16 00:23:47.074482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56128 len:8 PRP1 0x0 PRP2 0x0 00:22:39.283 [2024-07-16 00:23:47.074489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.283 [2024-07-16 00:23:47.074496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.283 [2024-07-16 00:23:47.074501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.283 [2024-07-16 
00:23:47.074508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56136 len:8 PRP1 0x0 PRP2 0x0 00:22:39.283 [2024-07-16 00:23:47.074515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.283 [2024-07-16 00:23:47.074521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.283 [2024-07-16 00:23:47.074526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.283 [2024-07-16 00:23:47.074531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56144 len:8 PRP1 0x0 PRP2 0x0 00:22:39.283 [2024-07-16 00:23:47.074537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.283 [2024-07-16 00:23:47.074544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.283 [2024-07-16 00:23:47.085373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.283 [2024-07-16 00:23:47.085382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56152 len:8 PRP1 0x0 PRP2 0x0 00:22:39.283 [2024-07-16 00:23:47.085390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.283 [2024-07-16 00:23:47.085397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.283 [2024-07-16 00:23:47.085402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.283 [2024-07-16 00:23:47.085408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56160 len:8 PRP1 0x0 PRP2 0x0 00:22:39.283 [2024-07-16 00:23:47.085415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.283 [2024-07-16 00:23:47.085422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.283 [2024-07-16 00:23:47.085428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.283 [2024-07-16 00:23:47.085433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56168 len:8 PRP1 0x0 PRP2 0x0 00:22:39.283 [2024-07-16 00:23:47.085439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.283 [2024-07-16 00:23:47.085446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.283 [2024-07-16 00:23:47.085451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.283 [2024-07-16 00:23:47.085456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56176 len:8 PRP1 0x0 PRP2 0x0 00:22:39.283 [2024-07-16 00:23:47.085462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.283 [2024-07-16 00:23:47.085469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.283 [2024-07-16 00:23:47.085474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.283 [2024-07-16 00:23:47.085480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56184 len:8 PRP1 0x0 PRP2 0x0
00:22:39.283 [2024-07-16 00:23:47.085486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:39.283 [2024-07-16 00:23:47.085492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:39.283 [2024-07-16 00:23:47.085497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:39.283 [2024-07-16 00:23:47.085502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56192 len:8 PRP1 0x0 PRP2 0x0
00:22:39.283 [2024-07-16 00:23:47.085509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:39.283 [2024-07-16 00:23:47.085516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:39.283 [2024-07-16 00:23:47.085522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:39.283 [2024-07-16 00:23:47.085528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56200 len:8 PRP1 0x0 PRP2 0x0
00:22:39.283 [2024-07-16 00:23:47.085535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:39.283 [2024-07-16 00:23:47.085541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:39.283 [2024-07-16 00:23:47.085546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:39.283 [2024-07-16 00:23:47.085551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56208 len:8 PRP1 0x0 PRP2 0x0
00:22:39.283 [2024-07-16 00:23:47.085557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:39.283 [2024-07-16 00:23:47.085599] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x117c380 was disconnected and freed. reset controller.
00:22:39.283 [2024-07-16 00:23:47.085607] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:22:39.283 [2024-07-16 00:23:47.085628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:39.283 [2024-07-16 00:23:47.085636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:39.283 [2024-07-16 00:23:47.085643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:39.283 [2024-07-16 00:23:47.085649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:39.283 [2024-07-16 00:23:47.085656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:39.283 [2024-07-16 00:23:47.085663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:39.283 [2024-07-16 00:23:47.085670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:39.283 [2024-07-16 00:23:47.085677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:39.284 [2024-07-16 00:23:47.085683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:39.284 [2024-07-16 00:23:47.085711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1540 (9): Bad file descriptor
00:22:39.284 [2024-07-16 00:23:47.089418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:39.284 [2024-07-16 00:23:47.203217] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:39.284 [2024-07-16 00:23:51.473267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473457] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473605] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75680 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.284 [2024-07-16 00:23:51.473884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.284 [2024-07-16 00:23:51.473890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.473898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.285 
[2024-07-16 00:23:51.473905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.473914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.285 [2024-07-16 00:23:51.473920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.473928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.285 [2024-07-16 00:23:51.473935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.473943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.285 [2024-07-16 00:23:51.473949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.473957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.285 [2024-07-16 00:23:51.473964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.473972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.285 [2024-07-16 00:23:51.473979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.473987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.285 [2024-07-16 00:23:51.473993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.474001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.285 [2024-07-16 00:23:51.474008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.474015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.285 [2024-07-16 00:23:51.474023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.474041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.285 [2024-07-16 00:23:51.474048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75832 len:8 PRP1 0x0 PRP2 0x0 00:22:39.285 [2024-07-16 00:23:51.474054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.474063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:22:39.285 [2024-07-16 00:23:51.474069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.285 [2024-07-16 00:23:51.474074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75840 len:8 PRP1 0x0 PRP2 0x0 00:22:39.285 [2024-07-16 00:23:51.474081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.474088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.285 [2024-07-16 00:23:51.474093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.285 [2024-07-16 00:23:51.474099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75848 len:8 PRP1 0x0 PRP2 0x0 00:22:39.285 [2024-07-16 00:23:51.474106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.474112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.285 [2024-07-16 00:23:51.474119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.285 [2024-07-16 00:23:51.474124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75856 len:8 PRP1 0x0 PRP2 0x0 00:22:39.285 [2024-07-16 00:23:51.474131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.474137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.285 [2024-07-16 00:23:51.474142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.285 [2024-07-16 00:23:51.474148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75864 len:8 PRP1 0x0 PRP2 0x0 00:22:39.285 [2024-07-16 00:23:51.474154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.474161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.285 [2024-07-16 00:23:51.474165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.285 [2024-07-16 00:23:51.474171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75392 len:8 PRP1 0x0 PRP2 0x0 00:22:39.285 [2024-07-16 00:23:51.474177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.474184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.285 [2024-07-16 00:23:51.474189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.285 [2024-07-16 00:23:51.474194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75400 len:8 PRP1 0x0 PRP2 0x0 00:22:39.285 [2024-07-16 00:23:51.474201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.474207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.285 [2024-07-16 00:23:51.474213] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.285 [2024-07-16 00:23:51.474219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75408 len:8 PRP1 0x0 PRP2 0x0 00:22:39.285 [2024-07-16 00:23:51.474230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.474237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.285 [2024-07-16 00:23:51.474243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.285 [2024-07-16 00:23:51.474249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75416 len:8 PRP1 0x0 PRP2 0x0 00:22:39.285 [2024-07-16 00:23:51.474256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.474263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.285 [2024-07-16 00:23:51.474268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.285 [2024-07-16 00:23:51.474274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75424 len:8 PRP1 0x0 PRP2 0x0 00:22:39.285 [2024-07-16 00:23:51.474280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.474287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.285 [2024-07-16 00:23:51.474293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.285 [2024-07-16 00:23:51.474298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75432 len:8 PRP1 0x0 PRP2 0x0 00:22:39.285 [2024-07-16 00:23:51.474305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.474313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.285 [2024-07-16 00:23:51.474318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.285 [2024-07-16 00:23:51.474324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75872 len:8 PRP1 0x0 PRP2 0x0 00:22:39.285 [2024-07-16 00:23:51.474331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.474338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.285 [2024-07-16 00:23:51.474343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.285 [2024-07-16 00:23:51.474349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75880 len:8 PRP1 0x0 PRP2 0x0 00:22:39.285 [2024-07-16 00:23:51.474355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.474362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.285 [2024-07-16 00:23:51.474367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:22:39.285 [2024-07-16 00:23:51.474374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75888 len:8 PRP1 0x0 PRP2 0x0 00:22:39.285 [2024-07-16 00:23:51.474380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.474387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.285 [2024-07-16 00:23:51.474392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.285 [2024-07-16 00:23:51.474398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75896 len:8 PRP1 0x0 PRP2 0x0 00:22:39.285 [2024-07-16 00:23:51.474404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.474411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.285 [2024-07-16 00:23:51.474416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.285 [2024-07-16 00:23:51.474421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75904 len:8 PRP1 0x0 PRP2 0x0 00:22:39.285 [2024-07-16 00:23:51.474428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.474435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.285 [2024-07-16 00:23:51.474441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.285 [2024-07-16 00:23:51.474446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75912 len:8 PRP1 0x0 PRP2 0x0 00:22:39.285 [2024-07-16 00:23:51.474453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.285 [2024-07-16 00:23:51.474460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 00:23:51.474471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75920 len:8 PRP1 0x0 PRP2 0x0 00:22:39.286 [2024-07-16 00:23:51.474477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.286 [2024-07-16 00:23:51.474484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 00:23:51.474494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75928 len:8 PRP1 0x0 PRP2 0x0 00:22:39.286 [2024-07-16 00:23:51.474502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.286 [2024-07-16 00:23:51.474508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 
00:23:51.474519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75936 len:8 PRP1 0x0 PRP2 0x0 00:22:39.286 [2024-07-16 00:23:51.474525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.286 [2024-07-16 00:23:51.474532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 00:23:51.474543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75944 len:8 PRP1 0x0 PRP2 0x0 00:22:39.286 [2024-07-16 00:23:51.474550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.286 [2024-07-16 00:23:51.474557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 00:23:51.474567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75952 len:8 PRP1 0x0 PRP2 0x0 00:22:39.286 [2024-07-16 00:23:51.474574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.286 [2024-07-16 00:23:51.474580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 00:23:51.474591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75960 len:8 PRP1 0x0 PRP2 0x0 00:22:39.286 [2024-07-16 00:23:51.474597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.286 [2024-07-16 00:23:51.474604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 00:23:51.474614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75968 len:8 PRP1 0x0 PRP2 0x0 00:22:39.286 [2024-07-16 00:23:51.474621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.286 [2024-07-16 00:23:51.474628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 00:23:51.474638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75976 len:8 PRP1 0x0 PRP2 0x0 00:22:39.286 [2024-07-16 00:23:51.474645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.286 [2024-07-16 00:23:51.474651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 00:23:51.474662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75984 len:8 PRP1 0x0 PRP2 0x0 00:22:39.286 [2024-07-16 00:23:51.474669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.286 [2024-07-16 00:23:51.474675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 00:23:51.474688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75992 len:8 PRP1 0x0 PRP2 0x0 00:22:39.286 [2024-07-16 00:23:51.474694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.286 [2024-07-16 00:23:51.474701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 00:23:51.474712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76000 len:8 PRP1 0x0 PRP2 0x0 00:22:39.286 [2024-07-16 00:23:51.474718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.286 [2024-07-16 00:23:51.474725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 00:23:51.474735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76008 len:8 PRP1 0x0 PRP2 0x0 00:22:39.286 [2024-07-16 00:23:51.474741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.286 [2024-07-16 00:23:51.474748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 00:23:51.474759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76016 len:8 PRP1 0x0 PRP2 0x0 00:22:39.286 [2024-07-16 00:23:51.474765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.286 [2024-07-16 00:23:51.474771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 00:23:51.474781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76024 len:8 PRP1 0x0 PRP2 0x0 00:22:39.286 [2024-07-16 00:23:51.474787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.286 [2024-07-16 00:23:51.474794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 00:23:51.474804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:76032 len:8 PRP1 0x0 PRP2 0x0 00:22:39.286 [2024-07-16 00:23:51.474811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.286 [2024-07-16 00:23:51.474819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 00:23:51.474830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76040 len:8 PRP1 0x0 PRP2 0x0 00:22:39.286 [2024-07-16 00:23:51.474836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.286 [2024-07-16 00:23:51.474843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 00:23:51.474853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76048 len:8 PRP1 0x0 PRP2 0x0 00:22:39.286 [2024-07-16 00:23:51.474860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.286 [2024-07-16 00:23:51.474868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 00:23:51.474878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76056 len:8 PRP1 0x0 PRP2 0x0 00:22:39.286 [2024-07-16 00:23:51.474884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.286 [2024-07-16 00:23:51.474891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 00:23:51.474901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76064 len:8 PRP1 0x0 PRP2 0x0 00:22:39.286 [2024-07-16 00:23:51.474908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.286 [2024-07-16 00:23:51.474915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 00:23:51.474925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76072 len:8 PRP1 0x0 PRP2 0x0 00:22:39.286 [2024-07-16 00:23:51.474931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.286 [2024-07-16 00:23:51.474938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.286 [2024-07-16 00:23:51.474943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.286 [2024-07-16 00:23:51.474949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76080 len:8 PRP1 0x0 PRP2 0x0 
00:22:39.286 [2024-07-16 00:23:51.474955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:39.286 [2024-07-16 00:23:51.474961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:39.286 [2024-07-16 00:23:51.474967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:39.286 [2024-07-16 00:23:51.474972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76088 len:8 PRP1 0x0 PRP2 0x0
00:22:39.286 [2024-07-16 00:23:51.474978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same abort/complete/print cycle repeats for every queued WRITE from lba:76096 through lba:76400, each completed manually with ABORTED - SQ DELETION (00/08) ...]
00:22:39.288 [2024-07-16 00:23:51.485548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:39.288 [2024-07-16 00:23:51.485555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:39.288 [2024-07-16 00:23:51.485563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76408 len:8 PRP1 0x0 PRP2 0x0
00:22:39.288 [2024-07-16 00:23:51.485571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:39.288 [2024-07-16 00:23:51.485617] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x117c170 was disconnected and freed. reset controller.
00:22:39.288 [2024-07-16 00:23:51.485628] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:39.288 [2024-07-16 00:23:51.485653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.288 [2024-07-16 00:23:51.485663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.288 [2024-07-16 00:23:51.485672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.288 [2024-07-16 00:23:51.485683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.288 [2024-07-16 00:23:51.485693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.288 [2024-07-16 00:23:51.485702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.288 [2024-07-16 00:23:51.485711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.288 [2024-07-16 00:23:51.485720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.288 [2024-07-16 00:23:51.485728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:39.288 [2024-07-16 00:23:51.485755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1540 (9): Bad file descriptor 00:22:39.288 [2024-07-16 00:23:51.489618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:39.288 [2024-07-16 00:23:51.559867] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
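The records above are one complete failover cycle: the path at 10.0.0.2:4422 drops, bdev_nvme fails over to the trid at 10.0.0.2:4420, the queued WRITEs are manually completed as ABORTED - SQ DELETION, and the controller reset succeeds. A minimal sketch of how the test sets that up and triggers it, using the same RPCs that appear in the host/failover.sh trace elsewhere in this log (rpc.py abbreviates the full scripts/rpc.py path):

  # Repeating bdev_nvme_attach_controller with the same -b NVMe0 but a new
  # port registers 4421/4422 as failover trids of one controller rather
  # than as separate controllers.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Detaching the active path forces the abort/reset cycle logged above;
  # the controller must still answer to its name afterwards.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0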
00:22:39.288
00:22:39.288 Latency(us)
00:22:39.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:39.288 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:39.288 Verification LBA range: start 0x0 length 0x4000
00:22:39.288 NVMe0n1 : 15.00 10762.40 42.04 1042.79 0.00 10820.69 833.45 21883.33
00:22:39.288 ===================================================================================================================
00:22:39.288 Total : 10762.40 42.04 1042.79 0.00 10820.69 833.45 21883.33
00:22:39.288 Received shutdown signal, test time was about 15.000000 seconds
00:22:39.288
00:22:39.288 Latency(us)
00:22:39.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:39.288 ===================================================================================================================
00:22:39.288 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:57 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:39.288 00:23:57 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:22:39.288 00:23:57 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:22:39.288 00:23:57 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1604781
00:22:39.288 00:23:57 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:57 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1604781 /var/tmp/bdevperf.sock
00:23:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@823 -- # '[' -z 1604781 ']'
00:23:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # local max_retries=100
00:23:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
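The waitforlisten above polls a bdevperf that was launched with -z, which parks the process on its RPC socket instead of running I/O immediately. A sketch of that driving pattern, with the same options as this run (paths relative to the spdk checkout):

  # -z: start idle on the -r socket; the workload (-q/-o/-w/-t) only runs
  # once bdevperf.py asks for it, after the NVMe bdev has been attached.
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # ... bdev_nvme_attach_controller calls against the socket go here ...
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  # a zero exit status from perform_tests means the verify workload passed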
00:22:39.288 00:23:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # xtrace_disable 00:22:39.288 00:23:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:39.856 00:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:22:39.856 00:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # return 0 00:22:39.856 00:23:58 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:40.115 [2024-07-16 00:23:58.791758] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:40.115 00:23:58 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:40.374 [2024-07-16 00:23:58.972256] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:40.374 00:23:59 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:40.633 NVMe0n1 00:22:40.633 00:23:59 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:40.891 00:22:40.891 00:23:59 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:41.150 00:22:41.408 00:24:00 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:41.408 00:24:00 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:41.408 00:24:00 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:41.666 00:24:00 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:44.962 00:24:03 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:44.962 00:24:03 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:44.962 00:24:03 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1605830 00:22:44.962 00:24:03 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:44.962 00:24:03 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1605830 00:22:45.945 0 00:22:45.945 00:24:04 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:45.945 [2024-07-16 00:23:57.807417] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:22:45.945 [2024-07-16 00:23:57.807465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604781 ] 00:22:45.945 [2024-07-16 00:23:57.860838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.945 [2024-07-16 00:23:57.930482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.945 [2024-07-16 00:24:00.345260] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:45.945 [2024-07-16 00:24:00.345316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.945 [2024-07-16 00:24:00.345328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.945 [2024-07-16 00:24:00.345338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.945 [2024-07-16 00:24:00.345346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.945 [2024-07-16 00:24:00.345354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.945 [2024-07-16 00:24:00.345361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.945 [2024-07-16 00:24:00.345369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.945 [2024-07-16 00:24:00.345375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.945 [2024-07-16 00:24:00.345383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:45.945 [2024-07-16 00:24:00.345412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:45.945 [2024-07-16 00:24:00.345428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6e540 (9): Bad file descriptor 00:22:45.945 [2024-07-16 00:24:00.478412] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:45.945 Running I/O for 1 seconds... 
00:22:45.945
00:22:45.945 Latency(us)
00:22:45.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:45.945 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:45.945 Verification LBA range: start 0x0 length 0x4000
00:22:45.945 NVMe0n1 : 1.01 11072.58 43.25 0.00 0.00 11515.66 2393.49 10314.80
00:22:45.945 ===================================================================================================================
00:22:45.945 Total : 11072.58 43.25 0.00 0.00 11515.66 2393.49 10314.80
00:22:45.945 00:24:04 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:04 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:22:46.202 00:24:04 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:46.460 00:24:05 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:46.460 00:24:05 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:22:46.460 00:24:05 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:46.719 00:24:05 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:22:49.998 00:24:08 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:49.998 00:24:08 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:22:49.998 00:24:08 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1604781
00:22:49.998 00:24:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@942 -- # '[' -z 1604781 ']'
00:22:49.998 00:24:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # kill -0 1604781
00:22:49.999 00:24:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # uname
00:22:49.999 00:24:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:22:49.999 00:24:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1604781
00:22:49.999 00:24:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # process_name=reactor_0
00:22:49.999 00:24:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']'
00:22:49.999 00:24:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1604781'
killing process with pid 1604781
00:22:49.999 00:24:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@961 -- # kill 1604781
00:22:49.999 00:24:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # wait 1604781
00:22:49.999 00:24:08 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:22:49.999 00:24:08 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:50.258 00:24:09 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:22:50.258
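The killprocess helper traced above (common/autotest_common.sh@942 onward) boils down to the following; this is a simplified reconstruction that keeps only the checks visible in the trace and drops the platform and sudo special cases:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" 2>/dev/null || return 1       # still running?
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      # the real helper treats a sudo parent specially; skipped here
      [ "$process_name" = sudo ] && return 1
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                  # reap and propagate status
  }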
00:24:09 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:50.258 00:24:09 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:50.258 00:24:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:50.258 00:24:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:22:50.258 00:24:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:50.258 00:24:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:22:50.258 00:24:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:50.258 00:24:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:50.258 rmmod nvme_tcp 00:22:50.258 rmmod nvme_fabrics 00:22:50.258 rmmod nvme_keyring 00:22:50.258 00:24:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:50.258 00:24:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:22:50.258 00:24:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:22:50.258 00:24:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1601622 ']' 00:22:50.258 00:24:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1601622 00:22:50.258 00:24:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@942 -- # '[' -z 1601622 ']' 00:22:50.258 00:24:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # kill -0 1601622 00:22:50.258 00:24:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # uname 00:22:50.258 00:24:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:22:50.258 00:24:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1601622 00:22:50.517 00:24:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:22:50.517 00:24:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:22:50.517 00:24:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1601622' 00:22:50.517 killing process with pid 1601622 00:22:50.517 00:24:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@961 -- # kill 1601622 00:22:50.517 00:24:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # wait 1601622 00:22:50.517 00:24:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:50.517 00:24:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:50.517 00:24:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:50.517 00:24:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:50.517 00:24:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:50.517 00:24:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.517 00:24:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:50.517 00:24:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.053 00:24:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:53.053 00:22:53.053 real 0m37.607s 00:22:53.053 user 2m3.095s 00:22:53.053 sys 0m7.012s 00:22:53.053 00:24:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1118 -- # xtrace_disable 00:22:53.053 00:24:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
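nvmftestfini, whose trace closes the test above, is the inverse of the setup: unload the kernel initiator modules, stop the target, and tear down the namespace plumbing. A sketch of the visible steps (the body of _remove_spdk_ns is not shown in this log, so the netns deletion below is an assumption):

  sync
  modprobe -v -r nvme-tcp                  # also pulls out nvme_fabrics/nvme_keyring users
  modprobe -v -r nvme-fabrics
  killprocess "$nvmfpid"                   # the nvmf_tgt started by nvmfappstart
  ip netns delete cvl_0_0_ns_spdk          # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1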
00:22:53.053 ************************************ 00:22:53.053 END TEST nvmf_failover 00:22:53.053 ************************************ 00:22:53.053 00:24:11 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:22:53.053 00:24:11 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:53.053 00:24:11 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:22:53.053 00:24:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:22:53.053 00:24:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:53.053 ************************************ 00:22:53.053 START TEST nvmf_host_discovery 00:22:53.053 ************************************ 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:53.053 * Looking for test storage... 00:22:53.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:53.053 00:24:11 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:22:53.053 00:24:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.328 00:24:16 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:58.328 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:58.328 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:58.328 00:24:16 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:58.328 Found net devices under 0000:86:00.0: cvl_0_0 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:58.328 Found net devices under 0000:86:00.1: cvl_0_1 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.328 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.329 00:24:16 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:58.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:22:58.329 00:22:58.329 --- 10.0.0.2 ping statistics --- 00:22:58.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.329 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:58.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:22:58.329 00:22:58.329 --- 10.0.0.1 ping statistics --- 00:22:58.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.329 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1610465 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1610465 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@823 -- # '[' -z 1610465 ']' 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # local max_retries=100 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # xtrace_disable 00:22:58.329 00:24:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.329 [2024-07-16 00:24:16.872701] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:22:58.329 [2024-07-16 00:24:16.872750] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.329 [2024-07-16 00:24:16.929353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.329 [2024-07-16 00:24:17.010085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.329 [2024-07-16 00:24:17.010120] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.329 [2024-07-16 00:24:17.010126] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.329 [2024-07-16 00:24:17.010132] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.329 [2024-07-16 00:24:17.010137] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
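Condensed from the nvmftestinit trace above, the test topology is two ports of one NIC, apparently cabled back to back: cvl_0_0 moves into a private namespace and carries the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, so traffic really crosses the wire. A sketch of the same steps (nvmf_tgt path abbreviated):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
  modprobe nvme-tcp                                    # kernel initiator side
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &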
00:22:58.329 [2024-07-16 00:24:17.010158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # return 0 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.897 [2024-07-16 00:24:17.705060] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.897 [2024-07-16 00:24:17.717184] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.897 null0 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.897 null1 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1610682 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1610682 /tmp/host.sock 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@823 -- # '[' -z 1610682 ']' 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # local rpc_addr=/tmp/host.sock 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # local max_retries=100 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:58.897 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # xtrace_disable 00:22:58.897 00:24:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:59.156 [2024-07-16 00:24:17.791121] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:22:59.156 [2024-07-16 00:24:17.791162] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610682 ] 00:22:59.156 [2024-07-16 00:24:17.844686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.156 [2024-07-16 00:24:17.924441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # return 0 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:00.092 
00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:00.092 00:24:18 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:00.092 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.092 [2024-07-16 00:24:18.940471] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.351 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:00.351 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:00.351 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:00.351 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:00.351 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:00.351 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:00.351 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.351 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:00.351 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:00.351 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:23:00.351 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:00.351 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.351 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:00.351 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:00.351 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:00.351 00:24:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:00.351 00:24:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count )) 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:00.351 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:00.352 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.352 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:00.352 00:24:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:00.352 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:00.352 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:00.352 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:00.352 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:00.352 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_names 00:23:00.352 00:24:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:00.352 00:24:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:00.352 00:24:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:00.352 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:00.352 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.352 00:24:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:00.352 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:00.352 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ '' == \n\v\m\e\0 ]] 00:23:00.352 00:24:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # sleep 1 00:23:00.919 [2024-07-16 00:24:19.654833] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:00.919 [2024-07-16 00:24:19.654857] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:00.919 [2024-07-16 00:24:19.654870] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:00.919 [2024-07-16 00:24:19.741121] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:01.179 [2024-07-16 00:24:19.797928] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:23:01.179 [2024-07-16 00:24:19.797948] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_names 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_bdev_list 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:01.438 00:24:20 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_paths nvme0 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:01.438 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ 4420 == \4\4\2\0 ]] 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count )) 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_bdev_list 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:01.697 
00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count )) 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:01.697 [2024-07-16 00:24:20.448641] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:01.697 [2024-07-16 00:24:20.448940] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:01.697 [2024-07-16 00:24:20.448964] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:01.697 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_names 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 
00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_bdev_list 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:01.698 [2024-07-16 00:24:20.536214] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:01.698 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:01.955 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:01.955 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:01.955 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:01.955 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_paths nvme0 00:23:01.955 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:01.955 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:01.955 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:01.955 00:24:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:01.955 
00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:01.955 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:01.955 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:01.955 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:01.955 00:24:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # sleep 1 00:23:02.214 [2024-07-16 00:24:20.842444] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:02.214 [2024-07-16 00:24:20.842463] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:02.214 [2024-07-16 00:24:20.842468] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:02.781 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:02.781 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:02.781 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_paths nvme0 00:23:02.781 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:02.781 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:02.781 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:02.781 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:02.781 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:02.781 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.781 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:03.040 00:24:21 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count )) 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:03.040 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:03.040 [2024-07-16 00:24:21.709062] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:03.041 [2024-07-16 00:24:21.709083] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:03.041 [2024-07-16 00:24:21.710436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.041 [2024-07-16 00:24:21.710452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.041 [2024-07-16 00:24:21.710461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.041 [2024-07-16 00:24:21.710471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.041 [2024-07-16 00:24:21.710494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.041 [2024-07-16 00:24:21.710501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.041 [2024-07-16 00:24:21.710508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.041 [2024-07-16 00:24:21.710525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.041 [2024-07-16 00:24:21.710532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ddf10 is same with the state(5) to be set 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@908 -- # (( max-- )) 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_names 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:03.041 [2024-07-16 00:24:21.720451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ddf10 (9): Bad file descriptor 00:23:03.041 [2024-07-16 00:24:21.730488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:03.041 [2024-07-16 00:24:21.730809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.041 [2024-07-16 00:24:21.730824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ddf10 with addr=10.0.0.2, port=4420 00:23:03.041 [2024-07-16 00:24:21.730832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ddf10 is same with the state(5) to be set 00:23:03.041 [2024-07-16 00:24:21.730844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ddf10 (9): Bad file descriptor 00:23:03.041 [2024-07-16 00:24:21.730855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:03.041 [2024-07-16 00:24:21.730862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:03.041 [2024-07-16 00:24:21.730869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:03.041 [2024-07-16 00:24:21.730879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
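The waitforcondition expansions running through this stretch of the log (autotest_common.sh@906-912) are a simple poll loop: evaluate the condition string up to 10 times, sleeping 1 s between attempts. A minimal re-implementation matching what the xtrace shows:

    # poll a shell condition until it holds, at most 10 tries 1 s apart
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

    # usage mirroring host/discovery.sh@129:
    # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'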
00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:03.041 [2024-07-16 00:24:21.740547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:03.041 [2024-07-16 00:24:21.740846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.041 [2024-07-16 00:24:21.740858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ddf10 with addr=10.0.0.2, port=4420 00:23:03.041 [2024-07-16 00:24:21.740869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ddf10 is same with the state(5) to be set 00:23:03.041 [2024-07-16 00:24:21.740879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ddf10 (9): Bad file descriptor 00:23:03.041 [2024-07-16 00:24:21.740889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:03.041 [2024-07-16 00:24:21.740895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:03.041 [2024-07-16 00:24:21.740901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:03.041 [2024-07-16 00:24:21.740911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:03.041 [2024-07-16 00:24:21.750595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:03.041 [2024-07-16 00:24:21.750818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.041 [2024-07-16 00:24:21.750830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ddf10 with addr=10.0.0.2, port=4420 00:23:03.041 [2024-07-16 00:24:21.750837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ddf10 is same with the state(5) to be set 00:23:03.041 [2024-07-16 00:24:21.750846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ddf10 (9): Bad file descriptor 00:23:03.041 [2024-07-16 00:24:21.750856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:03.041 [2024-07-16 00:24:21.750862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:03.041 [2024-07-16 00:24:21.750868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:03.041 [2024-07-16 00:24:21.750877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
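The host-side checks interleaved with these resets all go through the second app's RPC socket, /tmp/host.sock, where discovery was started earlier with bdev_nvme_start_discovery (host/discovery.sh@51). The jq/sort/xargs helpers the xtrace keeps expanding can be sketched as plain functions; rpc_cmd in the log plays the role of SPDK's scripts/rpc.py client, and the path below assumes the SPDK repo root as the working directory:

    # helpers mirrored from host/discovery.sh, run against the host app
    get_subsystem_names() {
        scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
    get_notification_count() {
        # count notifications newer than $notify_id, as polled above
        notification_count=$(scripts/rpc.py -s /tmp/host.sock \
            notify_get_notifications -i "${notify_id:-0}" | jq '. | length')
    }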
00:23:03.041 [2024-07-16 00:24:21.760645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:03.041 [2024-07-16 00:24:21.760965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.041 [2024-07-16 00:24:21.760978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ddf10 with addr=10.0.0.2, port=4420 00:23:03.041 [2024-07-16 00:24:21.760985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ddf10 is same with the state(5) to be set 00:23:03.041 [2024-07-16 00:24:21.760997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ddf10 (9): Bad file descriptor 00:23:03.041 [2024-07-16 00:24:21.761006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:03.041 [2024-07-16 00:24:21.761012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:03.041 [2024-07-16 00:24:21.761019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:03.041 [2024-07-16 00:24:21.761029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_bdev_list 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:03.041 [2024-07-16 00:24:21.770700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:03.041 [2024-07-16 00:24:21.770988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.041 [2024-07-16 00:24:21.771002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ddf10 with addr=10.0.0.2, port=4420 00:23:03.041 [2024-07-16 00:24:21.771009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ddf10 is same with the state(5) to be set 00:23:03.041 [2024-07-16 00:24:21.771020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ddf10 (9): Bad file descriptor 00:23:03.041 [2024-07-16 00:24:21.771030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:03.041 [2024-07-16 
00:24:21.771036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:03.041 [2024-07-16 00:24:21.771043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:03.041 [2024-07-16 00:24:21.771052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:03.041 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:03.041 [2024-07-16 00:24:21.780753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:03.041 [2024-07-16 00:24:21.781099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.041 [2024-07-16 00:24:21.781111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ddf10 with addr=10.0.0.2, port=4420 00:23:03.041 [2024-07-16 00:24:21.781119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ddf10 is same with the state(5) to be set 00:23:03.041 [2024-07-16 00:24:21.781129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ddf10 (9): Bad file descriptor 00:23:03.041 [2024-07-16 00:24:21.781139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:03.041 [2024-07-16 00:24:21.781145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:03.041 [2024-07-16 00:24:21.781152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:03.041 [2024-07-16 00:24:21.781161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:03.041 [2024-07-16 00:24:21.790806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:03.041 [2024-07-16 00:24:21.791103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.041 [2024-07-16 00:24:21.791115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ddf10 with addr=10.0.0.2, port=4420 00:23:03.041 [2024-07-16 00:24:21.791122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ddf10 is same with the state(5) to be set 00:23:03.041 [2024-07-16 00:24:21.791132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ddf10 (9): Bad file descriptor 00:23:03.041 [2024-07-16 00:24:21.791142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:03.041 [2024-07-16 00:24:21.791148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:03.041 [2024-07-16 00:24:21.791155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:03.041 [2024-07-16 00:24:21.791167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
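This reset storm is the expected fallout of host/discovery.sh@127 above, which removed the 4420 listener on the target (nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420). errno = 111 is ECONNREFUSED: bdev_nvme keeps resetting the now-dead 4420 path until the discovery poller sees that the log page no longer lists it ("4420 not found" just below) and drops it. The surviving path set can be confirmed the same way get_subsystem_paths does:

    # after the prune, only the 4421 path should remain on nvme0
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs   # -> 4421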
00:23:03.042 [2024-07-16 00:24:21.797773] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:23:03.042 [2024-07-16 00:24:21.797790] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- ))
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_paths nvme0
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ 4421 == \4\4\2\1 ]]
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- ))
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:03.042 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count ))
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- ))
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_names
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ '' == '' ]]
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- ))
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
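Every waitforcondition expansion traced above follows the same retry shape: the condition arrives as a string, max starts at 10, and the string is re-evaluated with eval until it holds. A minimal sketch of that pattern as it can be reconstructed from the xtrace (the per-retry sleep is an assumption; the real helpers live in common/autotest_common.sh and host/discovery.sh), together with the jq pipeline behind get_subsystem_paths:

```bash
# Reconstructed from the xtrace above; the per-retry sleep is an assumption.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        # The condition string is re-evaluated on every attempt.
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# Mirrors the host/discovery.sh@63 pipeline traced above: list the
# controller's paths and reduce them to a sorted list of service IDs.
get_subsystem_paths() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
```

With NVMF_SECOND_PORT=4421, the [[ 4421 == \4\4\2\1 ]] comparison in the trace is exactly one evaluation of this loop succeeding on the first try.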
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_bdev_list
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:03.320 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:03.321 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:03.321 00:24:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:03.321 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:03.321 00:24:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ '' == '' ]]
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- ))
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count ))
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:03.321 00:24:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:04.694 [2024-07-16 00:24:23.123838] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:23:04.694 [2024-07-16 00:24:23.123854] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:23:04.694 [2024-07-16 00:24:23.123865] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:04.694 [2024-07-16 00:24:23.212154] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:23:04.694 [2024-07-16 00:24:23.319790] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:23:04.694 [2024-07-16 00:24:23.319817] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@642 -- # local es=0
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@630 -- # local arg=rpc_cmd
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # type -t rpc_cmd
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:04.694 request:
00:23:04.694 {
00:23:04.694 "name": "nvme",
00:23:04.694 "trtype": "tcp",
00:23:04.694 "traddr": "10.0.0.2",
00:23:04.694 "adrfam": "ipv4",
00:23:04.694 "trsvcid": "8009",
00:23:04.694 "hostnqn": "nqn.2021-12.io.spdk:test",
00:23:04.694 "wait_for_attach": true,
00:23:04.694 "method": "bdev_nvme_start_discovery",
00:23:04.694 "req_id": 1
00:23:04.694 }
00:23:04.694 Got JSON-RPC error response
00:23:04.694 response:
00:23:04.694 {
00:23:04.694 "code": -17,
00:23:04.694 "message": "File exists"
00:23:04.694 }
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]]
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # es=1
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@653 -- # (( es > 128 ))
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@664 -- # [[ -n '' ]]
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@669 -- # (( !es == 0 ))
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@642 -- # local es=0
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
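The request/response pair above is the expected failure: host/discovery.sh@143 wraps the call in NOT because starting a second discovery service under an already-used name must return JSON-RPC error -17 ("File exists"). A hedged reproduction with rpc.py (socket, address, and arguments copied from the log; the rpc.py path is shortened here):

```bash
# First call succeeds; -w waits until the discovery controller is attached.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -w

# Re-issuing it with the same -b name is expected to fail with
# {"code": -17, "message": "File exists"}, exactly as logged above.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -w
```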
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@630 -- # local arg=rpc_cmd
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # type -t rpc_cmd
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:04.694 request:
00:23:04.694 {
00:23:04.694 "name": "nvme_second",
00:23:04.694 "trtype": "tcp",
00:23:04.694 "traddr": "10.0.0.2",
00:23:04.694 "adrfam": "ipv4",
00:23:04.694 "trsvcid": "8009",
00:23:04.694 "hostnqn": "nqn.2021-12.io.spdk:test",
00:23:04.694 "wait_for_attach": true,
00:23:04.694 "method": "bdev_nvme_start_discovery",
00:23:04.694 "req_id": 1
00:23:04.694 }
00:23:04.694 Got JSON-RPC error response
00:23:04.694 response:
00:23:04.694 {
00:23:04.694 "code": -17,
00:23:04.694 "message": "File exists"
00:23:04.694 }
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]]
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # es=1
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@653 -- # (( es > 128 ))
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@664 -- # [[ -n '' ]]
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@669 -- # (( !es == 0 ))
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:04.694 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:04.952 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:04.952 00:24:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:23:04.952 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@642 -- # local es=0
00:23:04.952 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:23:04.952 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@630 -- # local arg=rpc_cmd
00:23:04.952 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:23:04.952 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # type -t rpc_cmd
00:23:04.952 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:23:04.952 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:23:04.952 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:04.952 00:24:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:05.936 [2024-07-16 00:24:24.571357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:05.936 [2024-07-16 00:24:24.571385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231aa00 with addr=10.0.0.2, port=8010
00:23:05.936 [2024-07-16 00:24:24.571399] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:23:05.936 [2024-07-16 00:24:24.571406] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:23:05.936 [2024-07-16 00:24:24.571412] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:23:06.873 [2024-07-16 00:24:25.573713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:06.873 [2024-07-16 00:24:25.573738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231aa00 with addr=10.0.0.2, port=8010
00:23:06.873 [2024-07-16 00:24:25.573748] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:23:06.873 [2024-07-16 00:24:25.573754] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:23:06.873 [2024-07-16 00:24:25.573760] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:23:07.809 [2024-07-16 00:24:26.575909] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:23:07.809 request:
00:23:07.809 {
00:23:07.809 "name": "nvme_second",
00:23:07.809 "trtype": "tcp",
00:23:07.809 "traddr": "10.0.0.2",
00:23:07.809 "adrfam": "ipv4",
00:23:07.809 "trsvcid": "8010",
00:23:07.809 "hostnqn": "nqn.2021-12.io.spdk:test",
00:23:07.809 "wait_for_attach": false,
00:23:07.809 "attach_timeout_ms": 3000,
00:23:07.809 "method": "bdev_nvme_start_discovery",
00:23:07.809 "req_id": 1
00:23:07.809 }
00:23:07.809 Got JSON-RPC error response
00:23:07.809 response:
00:23:07.809 {
00:23:07.809 "code": -110,
00:23:07.809 "message": "Connection timed out"
00:23:07.809 }
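Nothing listens on port 8010, so every connect() retry above fails with errno 111 (connection refused) until the 3000 ms attach timeout elapses and the RPC surfaces -110 ("Connection timed out"). The equivalent standalone call (arguments copied from the trace; -T sets attach_timeout_ms, and -w is omitted so the discovery poller enforces the timeout):

```bash
# Expected to fail after ~3 s with JSON-RPC error -110 ("Connection
# timed out"): no discovery subsystem listens on 10.0.0.2:8010.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -T 3000
```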
00:23:07.809 "message": "Connection timed out" 00:23:07.809 } 00:23:07.809 00:24:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:23:07.809 00:24:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # es=1 00:23:07.809 00:24:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:23:07.809 00:24:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:23:07.809 00:24:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:23:07.810 00:24:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:07.810 00:24:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:07.810 00:24:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:07.810 00:24:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:07.810 00:24:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:07.810 00:24:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:07.810 00:24:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:07.810 00:24:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:07.810 00:24:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:07.810 00:24:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:07.810 00:24:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1610682 00:23:07.810 00:24:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:07.810 00:24:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:07.810 00:24:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:07.810 00:24:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:07.810 00:24:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:07.810 00:24:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:07.810 00:24:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:07.810 rmmod nvme_tcp 00:23:07.810 rmmod nvme_fabrics 00:23:08.068 rmmod nvme_keyring 00:23:08.068 00:24:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:08.068 00:24:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:08.068 00:24:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:08.069 00:24:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1610465 ']' 00:23:08.069 00:24:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1610465 00:23:08.069 00:24:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@942 -- # '[' -z 1610465 ']' 00:23:08.069 00:24:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # kill -0 1610465 00:23:08.069 00:24:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@947 -- # uname 00:23:08.069 00:24:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:23:08.069 00:24:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1610465 00:23:08.069 00:24:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # process_name=reactor_1 
00:23:08.069 00:24:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:23:08.069 00:24:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1610465' 00:23:08.069 killing process with pid 1610465 00:23:08.069 00:24:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@961 -- # kill 1610465 00:23:08.069 00:24:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # wait 1610465 00:23:08.069 00:24:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:08.069 00:24:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:08.069 00:24:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:08.069 00:24:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:08.069 00:24:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:08.069 00:24:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.069 00:24:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.069 00:24:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.606 00:24:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:10.606 00:23:10.606 real 0m17.500s 00:23:10.606 user 0m22.275s 00:23:10.606 sys 0m5.156s 00:23:10.606 00:24:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1118 -- # xtrace_disable 00:23:10.606 00:24:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.606 ************************************ 00:23:10.606 END TEST nvmf_host_discovery 00:23:10.606 ************************************ 00:23:10.606 00:24:29 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:23:10.606 00:24:29 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:10.606 00:24:29 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:23:10.606 00:24:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:23:10.606 00:24:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:10.606 ************************************ 00:23:10.606 START TEST nvmf_host_multipath_status 00:23:10.606 ************************************ 00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:10.606 * Looking for test storage... 
00:23:10.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:10.606 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:10.607 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:10.607 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:23:10.607 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:23:10.607 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable
00:23:10.607 00:24:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:15.883 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:15.883 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=()
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=()
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=()
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=()
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=()
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=()
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=()
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:23:15.884 Found 0000:86:00.0 (0x8086 - 0x159b)
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:23:15.884 Found 0000:86:00.1 (0x8086 - 0x159b)
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:23:15.884 Found net devices under 0000:86:00.0: cvl_0_0
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:23:15.884 Found net devices under 0000:86:00.1: cvl_0_1
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
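gather_supported_nvmf_pci_devs, traced above, matches the e810 device ID 0x159b on 0000:86:00.0 and 0000:86:00.1 and then resolves each PCI function to its kernel netdev through sysfs, yielding cvl_0_0 and cvl_0_1. A condensed sketch of that sysfs lookup (the real code in nvmf/common.sh also checks link state and rdma support; the PCI addresses below are simply the two ports found in this run):

```bash
# Condensed sketch of the lookup traced above (nvmf/common.sh@382-401).
for pci in 0000:86:00.0 0000:86:00.1; do
    # Each PCI function lists its netdev name(s) under .../net/.
    for net_dev in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "Found net devices under $pci: ${net_dev##*/}"
    done
done
```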
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:23:15.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:15.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms
00:23:15.884
00:23:15.884 --- 10.0.0.2 ping statistics ---
00:23:15.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:15.884 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:15.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:15.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms
00:23:15.884
00:23:15.884 --- 10.0.0.1 ping statistics ---
00:23:15.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:15.884 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@716 -- # xtrace_disable
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1615756
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1615756
00:23:15.884 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:23:15.885 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@823 -- # '[' -z 1615756 ']'
00:23:15.885 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:15.885 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # local max_retries=100
00:23:15.885 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:15.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:15.885 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # xtrace_disable
00:23:15.885 00:24:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:15.885 [2024-07-16 00:24:34.564926] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization...
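nvmf_tcp_init, traced above, splits the two-port NIC into a point-to-point topology: the target port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2 while the initiator port cvl_0_1 keeps 10.0.0.1 in the root namespace, and the two pings prove reachability in both directions before nvmf_tgt starts inside the namespace. The same steps, condensed from the trace:

```bash
# Condensed from the nvmf/common.sh@242-268 trace above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns
```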
00:23:15.885 [2024-07-16 00:24:34.564970] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:15.885 [2024-07-16 00:24:34.620451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:23:15.885 [2024-07-16 00:24:34.699953] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:15.885 [2024-07-16 00:24:34.699988] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:15.885 [2024-07-16 00:24:34.699995] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:15.885 [2024-07-16 00:24:34.700001] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:15.885 [2024-07-16 00:24:34.700006] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:15.885 [2024-07-16 00:24:34.700047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:15.885 [2024-07-16 00:24:34.700050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:16.821 00:24:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:23:16.821 00:24:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # return 0
00:23:16.821 00:24:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:16.821 00:24:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable
00:23:16.821 00:24:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:16.821 00:24:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:16.821 00:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1615756
00:23:16.822 00:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:23:16.822 [2024-07-16 00:24:35.567691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:17.082 00:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:23:17.082 Malloc0
00:23:17.082 00:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:23:17.340 00:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:17.340 00:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:17.599 [2024-07-16 00:24:36.248519] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
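Stripped of the wrappers, the target bring-up above is five RPCs: create the TCP transport, create the 64 MiB / 512 B Malloc0 bdev, create subsystem nqn.2016-06.io.spdk:cnode1 (as I read the flags, -a allows any host, -s sets the serial, -r enables the ANA reporting this test exercises, -m caps the namespace count), add the namespace, and open the first listener. Condensed, with the rpc.py path shortened:

```bash
rpc_py=scripts/rpc.py   # the log uses the absolute workspace path
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -r -m 2
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
```

The second listener on 4421, added next in the log, is what gives the host its second path.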
00:23:17.599 00:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:17.599 [2024-07-16 00:24:36.412950] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:17.599 00:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:23:17.599 00:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1616020
00:23:17.599 00:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:17.599 00:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1616020 /var/tmp/bdevperf.sock
00:23:17.599 00:24:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@823 -- # '[' -z 1616020 ']'
00:23:17.599 00:24:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:17.599 00:24:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # local max_retries=100
00:23:17.599 00:24:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:17.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:17.599 00:24:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # xtrace_disable
00:23:17.599 00:24:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:17.858 00:24:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:23:17.858 00:24:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # return 0
00:23:17.858 00:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:23:18.116 00:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
00:23:18.683 Nvme0n1
00:23:18.683 00:24:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:23:18.942 Nvme0n1
00:23:18.942 00:24:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:23:18.942 00:24:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:23:20.847 00:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
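On the host side, traced above, bdevperf attaches cnode1 once per portal; the second bdev_nvme_attach_controller passes -x multipath, so 4420 and 4421 become two paths of the same Nvme0 controller and both calls print the same Nvme0n1 bdev. A condensed sketch (as I read the flags, -l is the controller-loss timeout and -o the reconnect delay; all values copied from the trace):

```bash
rpc_py="scripts/rpc.py -s /var/tmp/bdevperf.sock"   # bdevperf RPC socket
$rpc_py bdev_nvme_set_options -r -1
$rpc_py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
# Same subsystem via the second portal; -x multipath merges it into Nvme0.
$rpc_py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
    -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
```

The set_ANA_state calls that follow flip each listener's ANA state on the target; the host's resulting view is then polled with port_status below.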
00:23:20.847 00:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:23:21.106 00:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:23:21.365 00:24:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:23:22.313 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:23:22.313 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:23:22.313 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:22.313 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:22.572 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:22.572 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:23:22.572 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:22.572 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:22.572 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:22.572 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:22.572 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:22.572 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:22.830 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:22.830 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:22.830 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:22.830 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:23.088 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:23.088 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:23.088 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:23.088 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
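Each port_status call above is one bdev_nvme_get_io_paths RPC plus a jq projection of a single boolean, and check_status simply chains six of them (current, connected, accessible for each portal). A reconstruction of the helper from the @64 xtrace lines (the argument order matches calls like port_status 4420 current true; the real function lives in host/multipath_status.sh):

```bash
# Reconstructed from the host/multipath_status.sh@64 trace above.
port_status() {
    local port=$1 attr=$2 expected=$3 actual
    actual=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
    # The trace's [[ true == \t\r\u\e ]] lines are this comparison.
    [[ "$actual" == "$expected" ]]
}
```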
00:23:23.088 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.088 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:23.088 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.088 00:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:23.346 00:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.346 00:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:23.346 00:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:23.604 00:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:23.861 00:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:24.796 00:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:24.796 00:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:24.796 00:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.796 00:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:25.070 00:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:25.070 00:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:25.070 00:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.070 00:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:25.070 00:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.070 00:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:25.070 00:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.070 00:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:25.342 00:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.342 00:24:44 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:25.342 00:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.342 00:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:25.600 00:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.600 00:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:25.600 00:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.600 00:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:25.600 00:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.600 00:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:25.600 00:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.600 00:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:25.858 00:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.858 00:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:25.858 00:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:26.116 00:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:26.375 00:24:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:27.311 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:27.311 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:27.311 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.311 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:27.568 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.568 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:27.568 00:24:46 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.568 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:27.568 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:27.568 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:27.568 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.568 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:27.826 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.826 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:27.826 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.826 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:28.084 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.084 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:28.084 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.084 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:28.084 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.084 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:28.084 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.084 00:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:28.342 00:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.342 00:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:28.342 00:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:28.600 00:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:28.859 00:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:29.795 00:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:29.795 00:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:29.795 00:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.795 00:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:30.054 00:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.054 00:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:30.054 00:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.054 00:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:30.313 00:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:30.313 00:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:30.313 00:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.313 00:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:30.313 00:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.313 00:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:30.313 00:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.313 00:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:30.572 00:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.572 00:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:30.572 00:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.572 00:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:30.830 00:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.830 00:24:49 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:30.830 00:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.830 00:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:30.830 00:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:30.830 00:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:30.830 00:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:31.088 00:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:31.347 00:24:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:32.283 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:32.283 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:32.283 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.283 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:32.542 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:32.542 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:32.542 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.542 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:32.542 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:32.542 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:32.542 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.542 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:32.802 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.802 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:32.802 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.802 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:33.061 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.061 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:33.061 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.061 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:33.061 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:33.061 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:33.061 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.062 00:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:33.320 00:24:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:33.320 00:24:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:33.320 00:24:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:33.579 00:24:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:33.838 00:24:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:34.774 00:24:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:34.774 00:24:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:34.774 00:24:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.774 00:24:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:35.032 00:24:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:35.032 00:24:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:35.032 00:24:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:23:35.032 00:24:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:35.032 00:24:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.032 00:24:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:35.032 00:24:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.032 00:24:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:35.290 00:24:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.290 00:24:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:35.291 00:24:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.291 00:24:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:35.549 00:24:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.550 00:24:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:35.550 00:24:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.550 00:24:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:35.550 00:24:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:35.550 00:24:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:35.550 00:24:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.550 00:24:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:35.809 00:24:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.809 00:24:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:36.067 00:24:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:36.067 00:24:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:36.325 00:24:54 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:36.325 00:24:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:37.705 00:24:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:37.705 00:24:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:37.705 00:24:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.705 00:24:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:37.705 00:24:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.705 00:24:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:37.705 00:24:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:37.705 00:24:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.705 00:24:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.705 00:24:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:37.705 00:24:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.705 00:24:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:37.965 00:24:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.965 00:24:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:37.965 00:24:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.965 00:24:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:38.223 00:24:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.223 00:24:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:38.223 00:24:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.223 00:24:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:38.223 00:24:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ 
true == \t\r\u\e ]] 00:23:38.223 00:24:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:38.223 00:24:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.223 00:24:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:38.482 00:24:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.482 00:24:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:38.482 00:24:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:38.741 00:24:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:39.000 00:24:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:40.000 00:24:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:40.000 00:24:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:40.000 00:24:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.000 00:24:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:40.000 00:24:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:40.000 00:24:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:40.000 00:24:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.000 00:24:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:40.259 00:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.259 00:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:40.259 00:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.259 00:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:40.518 00:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.518 00:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected 
true 00:23:40.518 00:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.518 00:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:40.518 00:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.518 00:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:40.776 00:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.776 00:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:40.776 00:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.776 00:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:40.776 00:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.776 00:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:41.034 00:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.034 00:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:41.034 00:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:41.293 00:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:41.293 00:25:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:42.667 00:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:42.667 00:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:42.667 00:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.667 00:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:42.667 00:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.667 00:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:42.667 00:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.667 00:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:42.667 00:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.667 00:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:42.667 00:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.667 00:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:42.926 00:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.926 00:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:42.926 00:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:42.926 00:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.186 00:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.186 00:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:43.186 00:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.186 00:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:43.445 00:25:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.445 00:25:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:43.445 00:25:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.445 00:25:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:43.445 00:25:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.445 00:25:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:43.445 00:25:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:43.704 00:25:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 -n inaccessible 00:23:43.964 00:25:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:44.913 00:25:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:44.913 00:25:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:44.913 00:25:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.913 00:25:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:45.172 00:25:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.172 00:25:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:45.172 00:25:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.172 00:25:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:45.172 00:25:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:45.172 00:25:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:45.172 00:25:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.172 00:25:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:45.432 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.432 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:45.432 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.432 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:45.690 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.690 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:45.690 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.690 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:45.949 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.949 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:45.949 
00:25:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.949 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:45.949 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:45.949 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1616020 00:23:45.949 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@942 -- # '[' -z 1616020 ']' 00:23:45.949 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # kill -0 1616020 00:23:45.949 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # uname 00:23:45.949 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:23:45.949 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1616020 00:23:46.212 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:23:46.212 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:23:46.212 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1616020' 00:23:46.212 killing process with pid 1616020 00:23:46.212 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@961 -- # kill 1616020 00:23:46.212 00:25:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # wait 1616020 00:23:46.212 Connection closed with partial response: 00:23:46.212 00:23:46.212 00:23:46.212 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1616020 00:23:46.212 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:46.212 [2024-07-16 00:24:36.459716] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:23:46.212 [2024-07-16 00:24:36.459766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616020 ] 00:23:46.212 [2024-07-16 00:24:36.509751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.212 [2024-07-16 00:24:36.584121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.212 Running I/O for 90 seconds... 
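The try.txt dump that follows records bdevperf's qpair activity while the listeners' ANA states were toggled; the ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions are the target failing I/O on a path whose ANA state had just been set to inaccessible. The transitions themselves are driven by the @59/@60 entries earlier in the trace, one nvmf_subsystem_listener_set_ana_state call per listener. A sketch of that helper, with the rpc.py invocations exactly as logged and the wrapper shape an assumption:

set_ANA_state() {
    # Set the ANA state of the 4420 listener to $1 and the 4421 listener to $2,
    # as seen in the @59/@60 xtrace lines.
    local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# The test walks the state matrix while bdevperf keeps I/O running, e.g.:
set_ANA_state non_optimized inaccessible   # 4420 stays usable, 4421 drops out
set_ANA_state inaccessible optimized       # I/O fails over to 4421

Note also the mid-test switch to bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active (@116): from that point on, with both listeners optimized, both paths report current == true simultaneously (check @121), whereas under the default active_passive policy only one path was current at a time.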
00:23:46.212 [2024-07-16 00:24:49.817216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:46.212 [2024-07-16 00:24:49.817259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:46.212 [2024-07-16 00:24:49.817292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.212 [2024-07-16 00:24:49.817301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:46.212 [2024-07-16 00:24:49.817315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.212 [2024-07-16 00:24:49.817322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:46.212 [2024-07-16 00:24:49.817335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.212 [2024-07-16 00:24:49.817342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:46.212 [2024-07-16 00:24:49.817354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.212 [2024-07-16 00:24:49.817361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:46.212 [2024-07-16 00:24:49.817373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.212 [2024-07-16 00:24:49.817380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:46.212 [2024-07-16 00:24:49.817392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.212 [2024-07-16 00:24:49.817400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:46.212 [2024-07-16 00:24:49.817412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.212 [2024-07-16 00:24:49.817419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:46.212 [2024-07-16 00:24:49.817431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.212 [2024-07-16 00:24:49.817438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:46.212 [2024-07-16 00:24:49.817454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.212 [2024-07-16 00:24:49.817462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.212 [2024-07-16 00:24:49.817563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.212 [2024-07-16 00:24:49.817580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:46.212 [2024-07-16 00:24:49.817595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.212 [2024-07-16 00:24:49.817603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:46.212 [2024-07-16 00:24:49.817616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.212 [2024-07-16 00:24:49.817623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:46.212 [2024-07-16 00:24:49.817636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.212 [2024-07-16 00:24:49.817643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:46.212 [2024-07-16 00:24:49.817655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.212 [2024-07-16 00:24:49.817662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:46.212 [2024-07-16 00:24:49.817675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.212 [2024-07-16 00:24:49.817682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:46.212 [2024-07-16 00:24:49.817694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.212 [2024-07-16 00:24:49.817701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:46.212 [2024-07-16 00:24:49.817715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.212 [2024-07-16 00:24:49.817722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:46.212 [2024-07-16 00:24:49.817735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.212 [2024-07-16 00:24:49.817741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:46.212 [2024-07-16 00:24:49.817754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.212 [2024-07-16 00:24:49.817760] 
00:23:46.212 [... several hundred repeated nvme_qpair.c *NOTICE* record pairs omitted (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): READ and WRITE commands on sqid:1 nsid:1 len:8, lba 19752 through 54992, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 between 2024-07-16 00:24:49.817 and 00:25:02.598 ...]
00:23:46.217 Received shutdown signal, test time was about 27.050496 seconds
00:23:46.217
00:23:46.217 Latency(us)
00:23:46.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:46.217 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:46.217 Verification LBA range: start 0x0 length 0x4000
00:23:46.217 Nvme0n1 : 27.05 10240.09 40.00 0.00 0.00 12479.77 171.85 3019898.88
00:23:46.217 ===================================================================================================================
00:23:46.217 Total : 10240.09 40.00 0.00 0.00 12479.77 171.85 3019898.88
00:23:46.217 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:46.476 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:46.477 rmmod nvme_tcp
00:23:46.477 rmmod nvme_fabrics
00:23:46.477 rmmod nvme_keyring
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1615756 ']'
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1615756
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@942 -- # '[' -z 1615756 ']'
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # kill -0 1615756
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # uname
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1615756
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # process_name=reactor_0
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']'
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1615756'
00:23:46.477 killing process with pid 1615756
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@961 -- # kill 1615756
00:23:46.477 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # wait 1615756
00:23:46.736 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:46.736 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:46.736 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:46.736 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:46.736 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:46.736 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:46.736 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:46.736 00:25:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:49.271 00:25:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:49.271
00:23:49.271 real 0m38.500s
00:23:49.271 user 1m44.133s
00:23:49.271 sys 0m10.394s
00:23:49.271 00:25:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1118 -- # xtrace_disable
00:23:49.271 00:25:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:49.271 ************************************
00:23:49.271 END TEST nvmf_host_multipath_status
00:23:49.271 ************************************
00:23:49.271 00:25:07 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0
-- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:49.271 00:25:07 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:23:49.271 00:25:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:23:49.271 00:25:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:49.271 ************************************ 00:23:49.271 START TEST nvmf_discovery_remove_ifc 00:23:49.271 ************************************ 00:23:49.271 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:49.271 * Looking for test storage... 00:23:49.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:49.271 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:49.271 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:49.271 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.271 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.271 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.271 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.271 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.271 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.271 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.271 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.272 
00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp 
== rdma ']' 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:23:49.272 00:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:54.565 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.565 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:23:54.565 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:54.565 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@298 -- # local -ga mlx 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:54.566 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:54.566 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.566 00:25:12 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:54.566 Found net devices under 0000:86:00.0: cvl_0_0 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:54.566 Found net devices under 0000:86:00.1: cvl_0_1 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.566 00:25:12 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:54.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:54.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:23:54.566 00:23:54.566 --- 10.0.0.2 ping statistics --- 00:23:54.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.566 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:54.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:54.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:23:54.566 00:23:54.566 --- 10.0.0.1 ping statistics --- 00:23:54.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.566 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1624311 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1624311 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@823 -- # '[' -z 1624311 ']' 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # local max_retries=100 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:54.566 00:25:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:54.566 [2024-07-16 00:25:13.016338] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:23:54.566 [2024-07-16 00:25:13.016384] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.566 [2024-07-16 00:25:13.074787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.566 [2024-07-16 00:25:13.147836] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.566 [2024-07-16 00:25:13.147879] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.566 [2024-07-16 00:25:13.147886] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.566 [2024-07-16 00:25:13.147892] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.566 [2024-07-16 00:25:13.147897] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:54.566 [2024-07-16 00:25:13.147918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.133 00:25:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:55.133 00:25:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # return 0 00:23:55.133 00:25:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:55.133 00:25:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:55.133 00:25:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:55.133 00:25:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.133 00:25:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:55.133 00:25:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:55.133 00:25:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:55.133 [2024-07-16 00:25:13.862157] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.133 [2024-07-16 00:25:13.870278] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:55.133 null0 00:23:55.133 [2024-07-16 00:25:13.902296] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.133 00:25:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:55.133 00:25:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1624548 00:23:55.133 00:25:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1624548 /tmp/host.sock 00:23:55.133 00:25:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@823 -- # '[' -z 1624548 ']' 00:23:55.133 00:25:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:55.133 00:25:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # local rpc_addr=/tmp/host.sock 00:23:55.133 00:25:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # local max_retries=100 00:23:55.133 00:25:13 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:55.133 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:55.133 00:25:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:55.133 00:25:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:55.133 [2024-07-16 00:25:13.956209] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:23:55.133 [2024-07-16 00:25:13.956259] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1624548 ] 00:23:55.392 [2024-07-16 00:25:14.010431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.392 [2024-07-16 00:25:14.084094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.959 00:25:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:55.959 00:25:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # return 0 00:23:55.959 00:25:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:55.959 00:25:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:55.959 00:25:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:55.959 00:25:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:55.959 00:25:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:55.959 00:25:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:55.959 00:25:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:55.959 00:25:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:56.218 00:25:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:56.218 00:25:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:56.218 00:25:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:56.218 00:25:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:57.153 [2024-07-16 00:25:15.850251] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:57.153 [2024-07-16 00:25:15.850271] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:57.153 [2024-07-16 00:25:15.850285] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:57.153 [2024-07-16 00:25:15.938570] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new 
subsystem nvme0 00:23:57.411 [2024-07-16 00:25:16.042505] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:57.411 [2024-07-16 00:25:16.042548] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:57.411 [2024-07-16 00:25:16.042567] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:57.411 [2024-07-16 00:25:16.042580] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:57.411 [2024-07-16 00:25:16.042598] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:57.411 [2024-07-16 00:25:16.049351] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x11a3e30 was disconnected and freed. delete nvme_qpair. 
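
The get_bdev_list helper traced above boils down to a four-stage pipeline. A minimal sketch of the equivalent shell, assuming rpc_cmd is the suite's wrapper around scripts/rpc.py pointed at the host-side socket:

    get_bdev_list() {
        # ask the app listening on /tmp/host.sock for its bdevs, keep only
        # the names, and print them sorted on a single line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
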
00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:57.411 00:25:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:58.851 00:25:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:58.851 00:25:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:58.851 00:25:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:58.851 00:25:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:58.851 00:25:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.851 00:25:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:58.851 00:25:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:58.851 00:25:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.851 00:25:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:58.851 00:25:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:59.786 00:25:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:59.786 00:25:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:59.786 00:25:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:59.786 00:25:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:59.786 00:25:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:59.786 00:25:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- 
# sort 00:23:59.786 00:25:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:59.786 00:25:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:59.786 00:25:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:59.787 00:25:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:00.723 00:25:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:00.723 00:25:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:00.723 00:25:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:00.723 00:25:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:00.723 00:25:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:00.723 00:25:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:00.723 00:25:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:00.723 00:25:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:00.723 00:25:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:00.723 00:25:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:01.658 00:25:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:01.658 00:25:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:01.658 00:25:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:01.658 00:25:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:01.658 00:25:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:01.658 00:25:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:01.658 00:25:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:01.658 00:25:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:01.658 00:25:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:01.658 00:25:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:03.035 00:25:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:03.035 00:25:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:03.035 00:25:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:03.035 00:25:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:03.035 00:25:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:03.035 00:25:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:03.035 00:25:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:03.035 00:25:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
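
The get_bdev_list / sleep 1 cycle repeating above is the wait_for_bdev poll: after the interface is pulled, the test loops until nvme0n1 drops out of the bdev list. A sketch consistent with the traced comparisons (the real helper in discovery_remove_ifc.sh may cap the number of iterations; the trace only shows the checks and sleeps):

    wait_for_bdev() {
        local expected=$1
        # '' means: wait for the nvme bdev to disappear;
        # a name means: wait for that bdev to show up
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }
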
00:24:03.035 [2024-07-16 00:25:21.483915] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:03.035 [2024-07-16 00:25:21.483952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.035 [2024-07-16 00:25:21.483964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.035 [2024-07-16 00:25:21.483972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.035 [2024-07-16 00:25:21.483979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.035 [2024-07-16 00:25:21.483986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.035 [2024-07-16 00:25:21.483993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.035 [2024-07-16 00:25:21.484000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.035 [2024-07-16 00:25:21.484006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.035 [2024-07-16 00:25:21.484014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.035 [2024-07-16 00:25:21.484026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.035 [2024-07-16 00:25:21.484033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116a690 is same with the state(5) to be set 00:24:03.035 [2024-07-16 00:25:21.493939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116a690 (9): Bad file descriptor 00:24:03.035 00:25:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:03.035 00:25:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:03.035 [2024-07-16 00:25:21.503978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:03.970 00:25:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:03.970 00:25:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:03.970 00:25:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:03.970 00:25:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:03.970 00:25:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:03.970 00:25:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:03.970 00:25:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:03.970 [2024-07-16 00:25:22.530269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:03.970 [2024-07-16 
00:25:22.530306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116a690 with addr=10.0.0.2, port=4420 00:24:03.970 [2024-07-16 00:25:22.530319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116a690 is same with the state(5) to be set 00:24:03.970 [2024-07-16 00:25:22.530343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116a690 (9): Bad file descriptor 00:24:03.970 [2024-07-16 00:25:22.530745] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:03.970 [2024-07-16 00:25:22.530766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:03.970 [2024-07-16 00:25:22.530775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:03.970 [2024-07-16 00:25:22.530785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:03.970 [2024-07-16 00:25:22.530803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:03.970 [2024-07-16 00:25:22.530813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:03.970 00:25:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:03.970 00:25:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:03.970 00:25:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:04.906 [2024-07-16 00:25:23.533292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:04.906 [2024-07-16 00:25:23.533314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:04.906 [2024-07-16 00:25:23.533321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:04.906 [2024-07-16 00:25:23.533328] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:04.906 [2024-07-16 00:25:23.533339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
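
The reset/reconnect churn above, and the controller being declared failed after roughly two seconds, follow from the timeouts passed when discovery was started earlier in this test. Assuming rpc_cmd forwards its arguments to scripts/rpc.py unchanged, the traced call is equivalent to:

    # retry the connection once per second, fast-fail I/O after 1 s,
    # and give up on (and delete) the controller after 2 s offline
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
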
00:24:04.906 [2024-07-16 00:25:23.533356] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:04.906 [2024-07-16 00:25:23.533378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.906 [2024-07-16 00:25:23.533387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.906 [2024-07-16 00:25:23.533396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.906 [2024-07-16 00:25:23.533403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.907 [2024-07-16 00:25:23.533410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.907 [2024-07-16 00:25:23.533417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.907 [2024-07-16 00:25:23.533424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.907 [2024-07-16 00:25:23.533430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.907 [2024-07-16 00:25:23.533438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.907 [2024-07-16 00:25:23.533445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.907 [2024-07-16 00:25:23.533452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:24:04.907 [2024-07-16 00:25:23.533659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1169a80 (9): Bad file descriptor 00:24:04.907 [2024-07-16 00:25:23.534669] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:04.907 [2024-07-16 00:25:23.534680] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:04.907 00:25:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:06.283 00:25:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:06.283 00:25:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:06.283 00:25:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:06.283 00:25:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:06.283 00:25:24 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:24:06.283 00:25:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:06.283 00:25:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:06.283 00:25:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:06.283 00:25:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:06.283 00:25:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:06.851 [2024-07-16 00:25:25.544957] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:06.851 [2024-07-16 00:25:25.544974] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:06.851 [2024-07-16 00:25:25.544988] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:06.851 [2024-07-16 00:25:25.673392] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:07.110 [2024-07-16 00:25:25.776759] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:07.110 [2024-07-16 00:25:25.776791] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:07.110 [2024-07-16 00:25:25.776808] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:07.110 [2024-07-16 00:25:25.776820] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:07.110 [2024-07-16 00:25:25.776827] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:07.110 [2024-07-16 00:25:25.784404] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x11808d0 was disconnected and freed. delete nvme_qpair. 
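
Restoring the address and link is all it takes for the still-running discovery service to re-attach and build a fresh controller (nvme1, bdev nvme1n1). The fault-injection pair this test exercises, as traced:

    NS=cvl_0_0_ns_spdk
    # pull the target's interface out from under the live connection
    ip netns exec $NS ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec $NS ip link set cvl_0_0 down
    # ... controller loss and bdev removal verified in between ...
    # give it back; discovery finds the subsystem again on its own
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec $NS ip link set cvl_0_0 up
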
00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1624548 00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@942 -- # '[' -z 1624548 ']' 00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # kill -0 1624548 00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # uname 00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1624548 00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1624548' 00:24:07.110 killing process with pid 1624548 00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@961 -- # kill 1624548 00:24:07.110 00:25:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # wait 1624548 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:07.369 rmmod nvme_tcp 00:24:07.369 rmmod nvme_fabrics 00:24:07.369 rmmod nvme_keyring 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
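
Both shutdowns here go through the same killprocess helper from autotest_common.sh; the traced commands amount to roughly the following (a reconstruction from the trace, not the verbatim source; the sudo branch is never taken in this run):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0      # already gone, nothing to do
        if [ "$(uname)" = Linux ] &&
           [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
            : # a sudo parent would need different handling; not hit here
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
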
00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1624311 ']' 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1624311 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@942 -- # '[' -z 1624311 ']' 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # kill -0 1624311 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # uname 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1624311 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1624311' 00:24:07.369 killing process with pid 1624311 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@961 -- # kill 1624311 00:24:07.369 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # wait 1624311 00:24:07.628 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:07.628 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:07.628 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:07.628 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:07.628 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:07.628 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.628 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:07.628 00:25:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.164 00:25:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:10.164 00:24:10.164 real 0m20.805s 00:24:10.164 user 0m26.398s 00:24:10.164 sys 0m5.227s 00:24:10.164 00:25:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:24:10.164 00:25:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.164 ************************************ 00:24:10.164 END TEST nvmf_discovery_remove_ifc 00:24:10.164 ************************************ 00:24:10.164 00:25:28 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:24:10.164 00:25:28 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:10.164 00:25:28 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:24:10.164 00:25:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:24:10.164 00:25:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:10.164 ************************************ 00:24:10.164 START TEST nvmf_identify_kernel_target 00:24:10.164 ************************************ 
00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:10.164 * Looking for test storage... 00:24:10.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.164 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:10.165 00:25:28 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:10.165 00:25:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.438 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:15.438 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:15.438 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:15.438 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:15.438 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:15.438 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:15.438 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:15.438 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:15.438 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:15.438 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:15.438 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:15.438 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:15.438 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:15.438 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:15.438 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:15.439 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:15.439 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:15.439 Found net devices under 0000:86:00.0: cvl_0_0 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:15.439 Found net devices under 0000:86:00.1: cvl_0_1 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:15.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:15.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:24:15.439 00:24:15.439 --- 10.0.0.2 ping statistics --- 00:24:15.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.439 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:15.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:15.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:24:15.439 00:24:15.439 --- 10.0.0.1 ping statistics --- 00:24:15.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.439 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:15.439 00:25:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:15.439 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:15.439 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:15.439 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:15.439 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.439 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:15.439 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.439 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.439 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:15.439 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.439 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:15.440 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:15.440 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:15.440 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:15.440 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:15.440 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:15.440 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:15.440 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:15.440 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:15.440 00:25:34 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:15.440 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:15.440 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:15.440 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:15.440 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:15.440 00:25:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:17.975 Waiting for block devices as requested 00:24:17.975 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:17.975 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:17.975 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:17.975 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:17.975 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:17.975 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:17.975 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:17.975 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:18.234 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:18.234 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:18.234 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:18.234 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:18.493 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:18.493 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:18.493 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:18.752 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:18.752 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:18.752 No valid GPT data, bailing 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:18.752 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:19.021 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:19.021 00:24:19.021 Discovery Log Number of Records 2, Generation counter 2 00:24:19.021 =====Discovery Log Entry 0====== 00:24:19.021 trtype: tcp 00:24:19.021 adrfam: ipv4 00:24:19.021 subtype: current discovery subsystem 00:24:19.021 treq: not specified, sq flow control disable supported 00:24:19.021 portid: 1 00:24:19.021 trsvcid: 4420 00:24:19.021 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:19.021 traddr: 10.0.0.1 00:24:19.021 eflags: none 00:24:19.021 sectype: none 00:24:19.021 =====Discovery Log Entry 1====== 00:24:19.021 trtype: tcp 00:24:19.021 adrfam: ipv4 00:24:19.021 subtype: nvme subsystem 00:24:19.021 treq: not specified, sq flow control disable supported 00:24:19.021 portid: 1 00:24:19.021 trsvcid: 4420 00:24:19.021 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:19.021 traddr: 10.0.0.1 00:24:19.021 eflags: none 00:24:19.021 sectype: none 00:24:19.021 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:19.021 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:19.021 ===================================================== 00:24:19.021 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:19.021 ===================================================== 00:24:19.021 Controller Capabilities/Features 00:24:19.021 ================================ 00:24:19.021 Vendor ID: 0000 00:24:19.021 Subsystem Vendor ID: 0000 00:24:19.021 Serial Number: fbf94edda48e67db211b 00:24:19.021 Model Number: Linux 00:24:19.021 Firmware Version: 6.7.0-68 00:24:19.021 Recommended Arb Burst: 0 00:24:19.021 IEEE OUI Identifier: 00 00 00 00:24:19.021 Multi-path I/O 00:24:19.021 May have multiple subsystem ports: No 00:24:19.021 May have multiple controllers: No 00:24:19.021 Associated with SR-IOV VF: No 00:24:19.021 
Max Data Transfer Size: Unlimited 00:24:19.021 Max Number of Namespaces: 0 00:24:19.021 Max Number of I/O Queues: 1024 00:24:19.021 NVMe Specification Version (VS): 1.3 00:24:19.021 NVMe Specification Version (Identify): 1.3 00:24:19.021 Maximum Queue Entries: 1024 00:24:19.021 Contiguous Queues Required: No 00:24:19.021 Arbitration Mechanisms Supported 00:24:19.021 Weighted Round Robin: Not Supported 00:24:19.021 Vendor Specific: Not Supported 00:24:19.021 Reset Timeout: 7500 ms 00:24:19.021 Doorbell Stride: 4 bytes 00:24:19.021 NVM Subsystem Reset: Not Supported 00:24:19.021 Command Sets Supported 00:24:19.021 NVM Command Set: Supported 00:24:19.021 Boot Partition: Not Supported 00:24:19.021 Memory Page Size Minimum: 4096 bytes 00:24:19.021 Memory Page Size Maximum: 4096 bytes 00:24:19.021 Persistent Memory Region: Not Supported 00:24:19.021 Optional Asynchronous Events Supported 00:24:19.021 Namespace Attribute Notices: Not Supported 00:24:19.021 Firmware Activation Notices: Not Supported 00:24:19.021 ANA Change Notices: Not Supported 00:24:19.021 PLE Aggregate Log Change Notices: Not Supported 00:24:19.021 LBA Status Info Alert Notices: Not Supported 00:24:19.021 EGE Aggregate Log Change Notices: Not Supported 00:24:19.021 Normal NVM Subsystem Shutdown event: Not Supported 00:24:19.021 Zone Descriptor Change Notices: Not Supported 00:24:19.021 Discovery Log Change Notices: Supported 00:24:19.021 Controller Attributes 00:24:19.021 128-bit Host Identifier: Not Supported 00:24:19.021 Non-Operational Permissive Mode: Not Supported 00:24:19.021 NVM Sets: Not Supported 00:24:19.021 Read Recovery Levels: Not Supported 00:24:19.021 Endurance Groups: Not Supported 00:24:19.021 Predictable Latency Mode: Not Supported 00:24:19.021 Traffic Based Keep ALive: Not Supported 00:24:19.021 Namespace Granularity: Not Supported 00:24:19.021 SQ Associations: Not Supported 00:24:19.021 UUID List: Not Supported 00:24:19.021 Multi-Domain Subsystem: Not Supported 00:24:19.021 Fixed Capacity Management: Not Supported 00:24:19.022 Variable Capacity Management: Not Supported 00:24:19.022 Delete Endurance Group: Not Supported 00:24:19.022 Delete NVM Set: Not Supported 00:24:19.022 Extended LBA Formats Supported: Not Supported 00:24:19.022 Flexible Data Placement Supported: Not Supported 00:24:19.022 00:24:19.022 Controller Memory Buffer Support 00:24:19.022 ================================ 00:24:19.022 Supported: No 00:24:19.022 00:24:19.022 Persistent Memory Region Support 00:24:19.022 ================================ 00:24:19.022 Supported: No 00:24:19.022 00:24:19.022 Admin Command Set Attributes 00:24:19.022 ============================ 00:24:19.022 Security Send/Receive: Not Supported 00:24:19.022 Format NVM: Not Supported 00:24:19.022 Firmware Activate/Download: Not Supported 00:24:19.022 Namespace Management: Not Supported 00:24:19.022 Device Self-Test: Not Supported 00:24:19.022 Directives: Not Supported 00:24:19.022 NVMe-MI: Not Supported 00:24:19.022 Virtualization Management: Not Supported 00:24:19.022 Doorbell Buffer Config: Not Supported 00:24:19.022 Get LBA Status Capability: Not Supported 00:24:19.022 Command & Feature Lockdown Capability: Not Supported 00:24:19.022 Abort Command Limit: 1 00:24:19.022 Async Event Request Limit: 1 00:24:19.022 Number of Firmware Slots: N/A 00:24:19.022 Firmware Slot 1 Read-Only: N/A 00:24:19.022 Firmware Activation Without Reset: N/A 00:24:19.022 Multiple Update Detection Support: N/A 00:24:19.022 Firmware Update Granularity: No Information Provided 00:24:19.022 
Per-Namespace SMART Log: No 00:24:19.022 Asymmetric Namespace Access Log Page: Not Supported 00:24:19.022 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:19.022 Command Effects Log Page: Not Supported 00:24:19.022 Get Log Page Extended Data: Supported 00:24:19.022 Telemetry Log Pages: Not Supported 00:24:19.022 Persistent Event Log Pages: Not Supported 00:24:19.022 Supported Log Pages Log Page: May Support 00:24:19.022 Commands Supported & Effects Log Page: Not Supported 00:24:19.022 Feature Identifiers & Effects Log Page:May Support 00:24:19.022 NVMe-MI Commands & Effects Log Page: May Support 00:24:19.022 Data Area 4 for Telemetry Log: Not Supported 00:24:19.022 Error Log Page Entries Supported: 1 00:24:19.022 Keep Alive: Not Supported 00:24:19.022 00:24:19.022 NVM Command Set Attributes 00:24:19.022 ========================== 00:24:19.022 Submission Queue Entry Size 00:24:19.022 Max: 1 00:24:19.022 Min: 1 00:24:19.022 Completion Queue Entry Size 00:24:19.022 Max: 1 00:24:19.022 Min: 1 00:24:19.022 Number of Namespaces: 0 00:24:19.022 Compare Command: Not Supported 00:24:19.022 Write Uncorrectable Command: Not Supported 00:24:19.022 Dataset Management Command: Not Supported 00:24:19.022 Write Zeroes Command: Not Supported 00:24:19.022 Set Features Save Field: Not Supported 00:24:19.022 Reservations: Not Supported 00:24:19.022 Timestamp: Not Supported 00:24:19.022 Copy: Not Supported 00:24:19.022 Volatile Write Cache: Not Present 00:24:19.022 Atomic Write Unit (Normal): 1 00:24:19.022 Atomic Write Unit (PFail): 1 00:24:19.022 Atomic Compare & Write Unit: 1 00:24:19.022 Fused Compare & Write: Not Supported 00:24:19.022 Scatter-Gather List 00:24:19.022 SGL Command Set: Supported 00:24:19.022 SGL Keyed: Not Supported 00:24:19.022 SGL Bit Bucket Descriptor: Not Supported 00:24:19.022 SGL Metadata Pointer: Not Supported 00:24:19.022 Oversized SGL: Not Supported 00:24:19.022 SGL Metadata Address: Not Supported 00:24:19.022 SGL Offset: Supported 00:24:19.022 Transport SGL Data Block: Not Supported 00:24:19.022 Replay Protected Memory Block: Not Supported 00:24:19.022 00:24:19.022 Firmware Slot Information 00:24:19.022 ========================= 00:24:19.022 Active slot: 0 00:24:19.022 00:24:19.022 00:24:19.022 Error Log 00:24:19.022 ========= 00:24:19.022 00:24:19.022 Active Namespaces 00:24:19.022 ================= 00:24:19.022 Discovery Log Page 00:24:19.022 ================== 00:24:19.022 Generation Counter: 2 00:24:19.022 Number of Records: 2 00:24:19.022 Record Format: 0 00:24:19.022 00:24:19.022 Discovery Log Entry 0 00:24:19.022 ---------------------- 00:24:19.022 Transport Type: 3 (TCP) 00:24:19.022 Address Family: 1 (IPv4) 00:24:19.022 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:19.022 Entry Flags: 00:24:19.022 Duplicate Returned Information: 0 00:24:19.022 Explicit Persistent Connection Support for Discovery: 0 00:24:19.022 Transport Requirements: 00:24:19.022 Secure Channel: Not Specified 00:24:19.022 Port ID: 1 (0x0001) 00:24:19.022 Controller ID: 65535 (0xffff) 00:24:19.022 Admin Max SQ Size: 32 00:24:19.022 Transport Service Identifier: 4420 00:24:19.022 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:19.022 Transport Address: 10.0.0.1 00:24:19.022 Discovery Log Entry 1 00:24:19.022 ---------------------- 00:24:19.022 Transport Type: 3 (TCP) 00:24:19.022 Address Family: 1 (IPv4) 00:24:19.022 Subsystem Type: 2 (NVM Subsystem) 00:24:19.022 Entry Flags: 00:24:19.022 Duplicate Returned Information: 0 00:24:19.022 Explicit Persistent 
Connection Support for Discovery: 0 00:24:19.022 Transport Requirements: 00:24:19.022 Secure Channel: Not Specified 00:24:19.022 Port ID: 1 (0x0001) 00:24:19.022 Controller ID: 65535 (0xffff) 00:24:19.022 Admin Max SQ Size: 32 00:24:19.022 Transport Service Identifier: 4420 00:24:19.022 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:19.022 Transport Address: 10.0.0.1 00:24:19.022 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:19.022 get_feature(0x01) failed 00:24:19.022 get_feature(0x02) failed 00:24:19.022 get_feature(0x04) failed 00:24:19.022 ===================================================== 00:24:19.022 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:19.022 ===================================================== 00:24:19.022 Controller Capabilities/Features 00:24:19.022 ================================ 00:24:19.022 Vendor ID: 0000 00:24:19.022 Subsystem Vendor ID: 0000 00:24:19.022 Serial Number: 734ba2a84bb6c3b8f2df 00:24:19.022 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:19.022 Firmware Version: 6.7.0-68 00:24:19.022 Recommended Arb Burst: 6 00:24:19.022 IEEE OUI Identifier: 00 00 00 00:24:19.022 Multi-path I/O 00:24:19.022 May have multiple subsystem ports: Yes 00:24:19.022 May have multiple controllers: Yes 00:24:19.022 Associated with SR-IOV VF: No 00:24:19.022 Max Data Transfer Size: Unlimited 00:24:19.022 Max Number of Namespaces: 1024 00:24:19.022 Max Number of I/O Queues: 128 00:24:19.022 NVMe Specification Version (VS): 1.3 00:24:19.022 NVMe Specification Version (Identify): 1.3 00:24:19.022 Maximum Queue Entries: 1024 00:24:19.022 Contiguous Queues Required: No 00:24:19.022 Arbitration Mechanisms Supported 00:24:19.022 Weighted Round Robin: Not Supported 00:24:19.022 Vendor Specific: Not Supported 00:24:19.022 Reset Timeout: 7500 ms 00:24:19.022 Doorbell Stride: 4 bytes 00:24:19.022 NVM Subsystem Reset: Not Supported 00:24:19.022 Command Sets Supported 00:24:19.022 NVM Command Set: Supported 00:24:19.022 Boot Partition: Not Supported 00:24:19.022 Memory Page Size Minimum: 4096 bytes 00:24:19.022 Memory Page Size Maximum: 4096 bytes 00:24:19.022 Persistent Memory Region: Not Supported 00:24:19.022 Optional Asynchronous Events Supported 00:24:19.022 Namespace Attribute Notices: Supported 00:24:19.022 Firmware Activation Notices: Not Supported 00:24:19.022 ANA Change Notices: Supported 00:24:19.022 PLE Aggregate Log Change Notices: Not Supported 00:24:19.022 LBA Status Info Alert Notices: Not Supported 00:24:19.022 EGE Aggregate Log Change Notices: Not Supported 00:24:19.022 Normal NVM Subsystem Shutdown event: Not Supported 00:24:19.022 Zone Descriptor Change Notices: Not Supported 00:24:19.022 Discovery Log Change Notices: Not Supported 00:24:19.022 Controller Attributes 00:24:19.022 128-bit Host Identifier: Supported 00:24:19.022 Non-Operational Permissive Mode: Not Supported 00:24:19.022 NVM Sets: Not Supported 00:24:19.022 Read Recovery Levels: Not Supported 00:24:19.022 Endurance Groups: Not Supported 00:24:19.022 Predictable Latency Mode: Not Supported 00:24:19.022 Traffic Based Keep ALive: Supported 00:24:19.022 Namespace Granularity: Not Supported 00:24:19.022 SQ Associations: Not Supported 00:24:19.022 UUID List: Not Supported 00:24:19.022 Multi-Domain Subsystem: Not Supported 00:24:19.022 Fixed 
Capacity Management: Not Supported 00:24:19.022 Variable Capacity Management: Not Supported 00:24:19.022 Delete Endurance Group: Not Supported 00:24:19.022 Delete NVM Set: Not Supported 00:24:19.022 Extended LBA Formats Supported: Not Supported 00:24:19.022 Flexible Data Placement Supported: Not Supported 00:24:19.022 00:24:19.022 Controller Memory Buffer Support 00:24:19.022 ================================ 00:24:19.022 Supported: No 00:24:19.022 00:24:19.022 Persistent Memory Region Support 00:24:19.022 ================================ 00:24:19.022 Supported: No 00:24:19.022 00:24:19.022 Admin Command Set Attributes 00:24:19.022 ============================ 00:24:19.022 Security Send/Receive: Not Supported 00:24:19.022 Format NVM: Not Supported 00:24:19.022 Firmware Activate/Download: Not Supported 00:24:19.022 Namespace Management: Not Supported 00:24:19.023 Device Self-Test: Not Supported 00:24:19.023 Directives: Not Supported 00:24:19.023 NVMe-MI: Not Supported 00:24:19.023 Virtualization Management: Not Supported 00:24:19.023 Doorbell Buffer Config: Not Supported 00:24:19.023 Get LBA Status Capability: Not Supported 00:24:19.023 Command & Feature Lockdown Capability: Not Supported 00:24:19.023 Abort Command Limit: 4 00:24:19.023 Async Event Request Limit: 4 00:24:19.023 Number of Firmware Slots: N/A 00:24:19.023 Firmware Slot 1 Read-Only: N/A 00:24:19.023 Firmware Activation Without Reset: N/A 00:24:19.023 Multiple Update Detection Support: N/A 00:24:19.023 Firmware Update Granularity: No Information Provided 00:24:19.023 Per-Namespace SMART Log: Yes 00:24:19.023 Asymmetric Namespace Access Log Page: Supported 00:24:19.023 ANA Transition Time : 10 sec 00:24:19.023 00:24:19.023 Asymmetric Namespace Access Capabilities 00:24:19.023 ANA Optimized State : Supported 00:24:19.023 ANA Non-Optimized State : Supported 00:24:19.023 ANA Inaccessible State : Supported 00:24:19.023 ANA Persistent Loss State : Supported 00:24:19.023 ANA Change State : Supported 00:24:19.023 ANAGRPID is not changed : No 00:24:19.023 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:19.023 00:24:19.023 ANA Group Identifier Maximum : 128 00:24:19.023 Number of ANA Group Identifiers : 128 00:24:19.023 Max Number of Allowed Namespaces : 1024 00:24:19.023 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:19.023 Command Effects Log Page: Supported 00:24:19.023 Get Log Page Extended Data: Supported 00:24:19.023 Telemetry Log Pages: Not Supported 00:24:19.023 Persistent Event Log Pages: Not Supported 00:24:19.023 Supported Log Pages Log Page: May Support 00:24:19.023 Commands Supported & Effects Log Page: Not Supported 00:24:19.023 Feature Identifiers & Effects Log Page:May Support 00:24:19.023 NVMe-MI Commands & Effects Log Page: May Support 00:24:19.023 Data Area 4 for Telemetry Log: Not Supported 00:24:19.023 Error Log Page Entries Supported: 128 00:24:19.023 Keep Alive: Supported 00:24:19.023 Keep Alive Granularity: 1000 ms 00:24:19.023 00:24:19.023 NVM Command Set Attributes 00:24:19.023 ========================== 00:24:19.023 Submission Queue Entry Size 00:24:19.023 Max: 64 00:24:19.023 Min: 64 00:24:19.023 Completion Queue Entry Size 00:24:19.023 Max: 16 00:24:19.023 Min: 16 00:24:19.023 Number of Namespaces: 1024 00:24:19.023 Compare Command: Not Supported 00:24:19.023 Write Uncorrectable Command: Not Supported 00:24:19.023 Dataset Management Command: Supported 00:24:19.023 Write Zeroes Command: Supported 00:24:19.023 Set Features Save Field: Not Supported 00:24:19.023 Reservations: Not Supported 00:24:19.023 
Timestamp: Not Supported 00:24:19.023 Copy: Not Supported 00:24:19.023 Volatile Write Cache: Present 00:24:19.023 Atomic Write Unit (Normal): 1 00:24:19.023 Atomic Write Unit (PFail): 1 00:24:19.023 Atomic Compare & Write Unit: 1 00:24:19.023 Fused Compare & Write: Not Supported 00:24:19.023 Scatter-Gather List 00:24:19.023 SGL Command Set: Supported 00:24:19.023 SGL Keyed: Not Supported 00:24:19.023 SGL Bit Bucket Descriptor: Not Supported 00:24:19.023 SGL Metadata Pointer: Not Supported 00:24:19.023 Oversized SGL: Not Supported 00:24:19.023 SGL Metadata Address: Not Supported 00:24:19.023 SGL Offset: Supported 00:24:19.023 Transport SGL Data Block: Not Supported 00:24:19.023 Replay Protected Memory Block: Not Supported 00:24:19.023 00:24:19.023 Firmware Slot Information 00:24:19.023 ========================= 00:24:19.023 Active slot: 0 00:24:19.023 00:24:19.023 Asymmetric Namespace Access 00:24:19.023 =========================== 00:24:19.023 Change Count : 0 00:24:19.023 Number of ANA Group Descriptors : 1 00:24:19.023 ANA Group Descriptor : 0 00:24:19.023 ANA Group ID : 1 00:24:19.023 Number of NSID Values : 1 00:24:19.023 Change Count : 0 00:24:19.023 ANA State : 1 00:24:19.023 Namespace Identifier : 1 00:24:19.023 00:24:19.023 Commands Supported and Effects 00:24:19.023 ============================== 00:24:19.023 Admin Commands 00:24:19.023 -------------- 00:24:19.023 Get Log Page (02h): Supported 00:24:19.023 Identify (06h): Supported 00:24:19.023 Abort (08h): Supported 00:24:19.023 Set Features (09h): Supported 00:24:19.023 Get Features (0Ah): Supported 00:24:19.023 Asynchronous Event Request (0Ch): Supported 00:24:19.023 Keep Alive (18h): Supported 00:24:19.023 I/O Commands 00:24:19.023 ------------ 00:24:19.023 Flush (00h): Supported 00:24:19.023 Write (01h): Supported LBA-Change 00:24:19.023 Read (02h): Supported 00:24:19.023 Write Zeroes (08h): Supported LBA-Change 00:24:19.023 Dataset Management (09h): Supported 00:24:19.023 00:24:19.023 Error Log 00:24:19.023 ========= 00:24:19.023 Entry: 0 00:24:19.023 Error Count: 0x3 00:24:19.023 Submission Queue Id: 0x0 00:24:19.023 Command Id: 0x5 00:24:19.023 Phase Bit: 0 00:24:19.023 Status Code: 0x2 00:24:19.023 Status Code Type: 0x0 00:24:19.023 Do Not Retry: 1 00:24:19.023 Error Location: 0x28 00:24:19.023 LBA: 0x0 00:24:19.023 Namespace: 0x0 00:24:19.023 Vendor Log Page: 0x0 00:24:19.023 ----------- 00:24:19.023 Entry: 1 00:24:19.023 Error Count: 0x2 00:24:19.023 Submission Queue Id: 0x0 00:24:19.023 Command Id: 0x5 00:24:19.023 Phase Bit: 0 00:24:19.023 Status Code: 0x2 00:24:19.023 Status Code Type: 0x0 00:24:19.023 Do Not Retry: 1 00:24:19.023 Error Location: 0x28 00:24:19.023 LBA: 0x0 00:24:19.023 Namespace: 0x0 00:24:19.023 Vendor Log Page: 0x0 00:24:19.023 ----------- 00:24:19.023 Entry: 2 00:24:19.023 Error Count: 0x1 00:24:19.023 Submission Queue Id: 0x0 00:24:19.023 Command Id: 0x4 00:24:19.023 Phase Bit: 0 00:24:19.023 Status Code: 0x2 00:24:19.023 Status Code Type: 0x0 00:24:19.023 Do Not Retry: 1 00:24:19.023 Error Location: 0x28 00:24:19.023 LBA: 0x0 00:24:19.023 Namespace: 0x0 00:24:19.023 Vendor Log Page: 0x0 00:24:19.023 00:24:19.023 Number of Queues 00:24:19.023 ================ 00:24:19.023 Number of I/O Submission Queues: 128 00:24:19.023 Number of I/O Completion Queues: 128 00:24:19.023 00:24:19.023 ZNS Specific Controller Data 00:24:19.023 ============================ 00:24:19.023 Zone Append Size Limit: 0 00:24:19.023 00:24:19.023 00:24:19.023 Active Namespaces 00:24:19.023 ================= 00:24:19.023 
get_feature(0x05) failed 00:24:19.023 Namespace ID:1 00:24:19.023 Command Set Identifier: NVM (00h) 00:24:19.023 Deallocate: Supported 00:24:19.023 Deallocated/Unwritten Error: Not Supported 00:24:19.023 Deallocated Read Value: Unknown 00:24:19.023 Deallocate in Write Zeroes: Not Supported 00:24:19.023 Deallocated Guard Field: 0xFFFF 00:24:19.023 Flush: Supported 00:24:19.023 Reservation: Not Supported 00:24:19.023 Namespace Sharing Capabilities: Multiple Controllers 00:24:19.023 Size (in LBAs): 1953525168 (931GiB) 00:24:19.023 Capacity (in LBAs): 1953525168 (931GiB) 00:24:19.023 Utilization (in LBAs): 1953525168 (931GiB) 00:24:19.023 UUID: 0c19811e-7382-421e-a5b0-b7e4caf8ed52 00:24:19.023 Thin Provisioning: Not Supported 00:24:19.023 Per-NS Atomic Units: Yes 00:24:19.023 Atomic Boundary Size (Normal): 0 00:24:19.023 Atomic Boundary Size (PFail): 0 00:24:19.023 Atomic Boundary Offset: 0 00:24:19.023 NGUID/EUI64 Never Reused: No 00:24:19.023 ANA group ID: 1 00:24:19.023 Namespace Write Protected: No 00:24:19.023 Number of LBA Formats: 1 00:24:19.023 Current LBA Format: LBA Format #00 00:24:19.023 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:19.023 00:24:19.023 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:19.023 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:19.023 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:19.023 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:19.023 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:19.023 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:19.023 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:19.023 rmmod nvme_tcp 00:24:19.023 rmmod nvme_fabrics 00:24:19.303 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:19.303 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:19.303 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:19.303 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:19.303 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:19.303 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:19.303 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:19.303 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:19.303 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:19.303 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.303 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:19.303 00:25:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.209 00:25:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:21.209 00:25:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:21.209 00:25:39 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:21.209 00:25:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:24:21.209 00:25:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:21.209 00:25:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:21.209 00:25:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:21.209 00:25:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:21.209 00:25:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:21.209 00:25:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:21.209 00:25:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:23.745 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:23.745 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:23.745 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:23.745 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:23.745 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:23.745 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:23.745 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:23.745 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:23.745 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:23.745 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:23.745 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:23.745 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:23.745 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:24.003 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:24.003 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:24.003 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:24.569 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:24.826 00:24:24.826 real 0m15.021s 00:24:24.826 user 0m3.655s 00:24:24.826 sys 0m7.662s 00:24:24.826 00:25:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1118 -- # xtrace_disable 00:24:24.826 00:25:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.826 ************************************ 00:24:24.826 END TEST nvmf_identify_kernel_target 00:24:24.826 ************************************ 00:24:24.826 00:25:43 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:24:24.826 00:25:43 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:24.826 00:25:43 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:24:24.826 00:25:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:24:24.826 00:25:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:24.826 ************************************ 00:24:24.826 START TEST nvmf_auth_host 00:24:24.827 ************************************ 00:24:24.827 00:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:24.827 * Looking for test storage... 
00:24:24.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:24.827 00:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:24.827 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:24.827 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.827 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.827 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.827 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.827 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.827 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.827 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.827 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.827 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:25.085 00:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:30.357 
00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:30.357 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:30.357 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:30.358 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:30.358 Found net devices under 0000:86:00.0: 
cvl_0_0 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:30.358 Found net devices under 0000:86:00.1: cvl_0_1 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:30.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:30.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:24:30.358 00:24:30.358 --- 10.0.0.2 ping statistics --- 00:24:30.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.358 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:30.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:30.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:24:30.358 00:24:30.358 --- 10.0.0.1 ping statistics --- 00:24:30.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.358 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1636083 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1636083 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@823 -- # '[' -z 1636083 ']' 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
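
nvmf_tcp_init above turns the two ice ports into a point-to-point test link: cvl_0_0 moves into a private namespace, cvl_0_1 stays in the root namespace, the iptables rule opens the NVMe/TCP port, and the two pings prove both directions before nvmf_tgt is launched inside the namespace. Condensed from the trace, with the interface names and addresses of this run:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # NVMF_TARGET_INTERFACE
ip addr add 10.0.0.1/24 dev cvl_0_1                                # NVMF_INITIATOR_IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # NVMF_FIRST_TARGET_IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                 # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> root ns

Every target-side command after this point carries the `ip netns exec cvl_0_0_ns_spdk` prefix, which is why the nvmf_tgt launch at the end of this block does too.
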
00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:30.358 00:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # return 0 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=485fa8fa37607e6e884143f3b37a8623 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.V6d 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 485fa8fa37607e6e884143f3b37a8623 0 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 485fa8fa37607e6e884143f3b37a8623 0 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=485fa8fa37607e6e884143f3b37a8623 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.V6d 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.V6d 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.V6d 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:31.316 
00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1443f39c9054c54dfdead85b468d5803a53c5ce099b599f5fe7f6e7f2fd092d7 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.3zl 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1443f39c9054c54dfdead85b468d5803a53c5ce099b599f5fe7f6e7f2fd092d7 3 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1443f39c9054c54dfdead85b468d5803a53c5ce099b599f5fe7f6e7f2fd092d7 3 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1443f39c9054c54dfdead85b468d5803a53c5ce099b599f5fe7f6e7f2fd092d7 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.3zl 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.3zl 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.3zl 00:24:31.316 00:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:31.317 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:31.317 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:31.317 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:31.317 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:31.317 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:31.317 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:31.317 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dc5d57eb1afc1fa7fdb792a8e20f88bec3b4877e833363c2 00:24:31.317 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:31.317 00:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.f4k 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dc5d57eb1afc1fa7fdb792a8e20f88bec3b4877e833363c2 0 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dc5d57eb1afc1fa7fdb792a8e20f88bec3b4877e833363c2 0 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dc5d57eb1afc1fa7fdb792a8e20f88bec3b4877e833363c2 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.f4k 00:24:31.317 00:25:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.f4k 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.f4k 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f9a9b27e5784b4728178db3926cc90ba66e7d0c448cdc420 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.UvO 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f9a9b27e5784b4728178db3926cc90ba66e7d0c448cdc420 2 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f9a9b27e5784b4728178db3926cc90ba66e7d0c448cdc420 2 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f9a9b27e5784b4728178db3926cc90ba66e7d0c448cdc420 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.UvO 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.UvO 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.UvO 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8e8d0717f4bd0107c8e5e0346029102c 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.m1G 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8e8d0717f4bd0107c8e5e0346029102c 1 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8e8d0717f4bd0107c8e5e0346029102c 1 
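
gen_dhchap_key above is the secret factory for the whole test, and the pattern repeats below for the remaining key slots: `xxd` pulls len/2 random bytes as a hex string, and the hidden python one-liner (xtrace does not expand heredocs) wraps it in the DH-HMAC-CHAP ASCII representation. A sketch under those assumptions, with the hash index taken from the digests table above (00=null, 01=sha256, 02=sha384, 03=sha512):

idx=1 len=32                                     # e.g. a sha256-hinted 32-char secret
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars; the ASCII text itself is the secret
python3 - "$idx" "$key" <<'PY'
import base64, sys, zlib
idx, key = int(sys.argv[1]), sys.argv[2].encode()
crc = zlib.crc32(key).to_bytes(4, "little")      # 4-byte CRC-32 appended; little-endian assumed
print("DHHC-1:{:02x}:{}:".format(idx, base64.b64encode(key + crc).decode()))
PY

The trace itself confirms the encoding: keys[1]=dc5d57eb... resurfaces further down as DHHC-1:00:ZGM1ZDU3ZWIx..., which is the base64 of that exact ASCII string plus four trailing CRC bytes.
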
00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8e8d0717f4bd0107c8e5e0346029102c 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.m1G 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.m1G 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.m1G 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:31.317 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3e84b223486ee5e0b855e9c1a99d8cad 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.QQ3 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3e84b223486ee5e0b855e9c1a99d8cad 1 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3e84b223486ee5e0b855e9c1a99d8cad 1 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3e84b223486ee5e0b855e9c1a99d8cad 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.QQ3 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.QQ3 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.QQ3 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=6d17df112f7bdc14c9d97e3c983a4bd569f884f206773ae3 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.JPC 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6d17df112f7bdc14c9d97e3c983a4bd569f884f206773ae3 2 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6d17df112f7bdc14c9d97e3c983a4bd569f884f206773ae3 2 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6d17df112f7bdc14c9d97e3c983a4bd569f884f206773ae3 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.JPC 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.JPC 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.JPC 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f40148044e03db9078cd1879ddb847a5 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fob 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f40148044e03db9078cd1879ddb847a5 0 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f40148044e03db9078cd1879ddb847a5 0 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f40148044e03db9078cd1879ddb847a5 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fob 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fob 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.fob 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c5f62ecfe1a656beba23e5f739895c937afdb47b2eaf0549d2e8aaca76ef013a 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6oO 00:24:31.576 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c5f62ecfe1a656beba23e5f739895c937afdb47b2eaf0549d2e8aaca76ef013a 3 00:24:31.577 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c5f62ecfe1a656beba23e5f739895c937afdb47b2eaf0549d2e8aaca76ef013a 3 00:24:31.577 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:31.577 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:31.577 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c5f62ecfe1a656beba23e5f739895c937afdb47b2eaf0549d2e8aaca76ef013a 00:24:31.577 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:31.577 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:31.577 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6oO 00:24:31.577 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6oO 00:24:31.577 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.6oO 00:24:31.577 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:31.577 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1636083 00:24:31.577 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@823 -- # '[' -z 1636083 ']' 00:24:31.577 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.577 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:31.577 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
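
With nvmf_tgt up (pid 1636083), the loop traced below registers every secret file with the target's keyring so later bdev_nvme RPCs can reference secrets by name rather than path; ckeys[4] is deliberately left empty, so key4 ends up without a controller key. The same loop in standalone form (rpc_cmd is the autotest wrapper around scripts/rpc.py):

for i in "${!keys[@]}"; do
    scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
    [[ -n ${ckeys[i]} ]] && scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
done
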
00:24:31.577 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:31.577 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # return 0 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.V6d 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.3zl ]] 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3zl 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.f4k 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.UvO ]] 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.UvO 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.835 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.m1G 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.QQ3 ]] 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QQ3 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.JPC 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.fob ]] 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.fob 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.6oO 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
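
nvmet_auth_init then builds the kernel NVMe-oF target that the SPDK host will authenticate against; configure_kernel_target (traced below) assembles it purely through configfs, backed by the bare nvme0n1 drive that `setup.sh reset` hands back to the kernel. xtrace hides where each echo lands, so the attribute names in this sketch follow the standard nvmet layout and are assumptions, not values read from the log:

modprobe nvmet
s=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
p=/sys/kernel/config/nvmet/ports/1
mkdir "$s" "$s/namespaces/1" "$p"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$s/attr_model"          # or attr_serial; target not shown
echo 1                               > "$s/attr_allow_any_host" # re-tightened below via allowed_hosts
echo /dev/nvme0n1                    > "$s/namespaces/1/device_path"
echo 1                               > "$s/namespaces/1/enable"
echo 10.0.0.1                        > "$p/addr_traddr"
echo tcp                             > "$p/addr_trtype"
echo 4420                            > "$p/addr_trsvcid"
echo ipv4                            > "$p/addr_adrfam"
ln -s "$s" "$p/subsystems/"

The `nvme discover` call that follows is the sanity check: two discovery-log entries, the discovery subsystem itself and nqn.2024-02.io.spdk:cnode0, both on 10.0.0.1:4420.
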
00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:31.836 00:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:34.370 Waiting for block devices as requested 00:24:34.628 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:34.628 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:34.628 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:34.887 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:34.887 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:34.887 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:34.887 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:35.146 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:35.146 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:35.146 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:35.146 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:35.404 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:35.404 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:35.404 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:35.662 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:35.663 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:35.663 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:36.227 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:36.227 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:36.227 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:36.227 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:24:36.227 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:36.227 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:24:36.227 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:36.227 00:25:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:36.227 00:25:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:36.484 No valid GPT data, bailing 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:36.484 00:24:36.484 Discovery Log Number of Records 2, Generation counter 2 00:24:36.484 =====Discovery Log Entry 0====== 00:24:36.484 trtype: tcp 00:24:36.484 adrfam: ipv4 00:24:36.484 subtype: current discovery subsystem 00:24:36.484 treq: not specified, sq flow control disable supported 00:24:36.484 portid: 1 00:24:36.484 trsvcid: 4420 00:24:36.484 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:36.484 traddr: 10.0.0.1 00:24:36.484 eflags: none 00:24:36.484 sectype: none 00:24:36.484 =====Discovery Log Entry 1====== 00:24:36.484 trtype: tcp 00:24:36.484 adrfam: ipv4 00:24:36.484 subtype: nvme subsystem 00:24:36.484 treq: not specified, sq flow control disable supported 00:24:36.484 portid: 1 00:24:36.484 trsvcid: 4420 00:24:36.484 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:36.484 traddr: 10.0.0.1 00:24:36.484 eflags: none 00:24:36.484 sectype: none 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 
]] 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:36.484 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.742 nvme0n1 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.742 00:25:55 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: ]] 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:36.742 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.743 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:36.743 
00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.743 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:36.743 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:36.743 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:36.743 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.743 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.743 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:36.743 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.743 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:36.743 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:36.743 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:36.743 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:36.743 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:36.743 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.743 nvme0n1 00:24:36.743 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:36.743 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.743 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.743 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:36.743 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.743 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:37.001 00:25:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: ]] 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.001 nvme0n1 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
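The trace above repeats one fixed cycle per (digest, dhgroup, keyid) tuple: restrict the host's allowed DH-HMAC-CHAP parameters, attach with the key pair under test, confirm the controller materialized, then detach. A minimal sketch of that cycle, assuming rpc_cmd wraps SPDK's scripts/rpc.py as in the autotest helpers, and that the keyring names key0..key4 / ckey0..ckey4 were registered earlier in the run:

# Sketch of the connect/verify/detach cycle visible in the frames above.
# Assumptions: rpc_cmd talks to a running SPDK target, the target listens
# on 10.0.0.1:4420 as in this run, and ckeys[] mirrors the controller
# secrets printed in the trace (empty for keyid 4).
connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3

	# Allow only the digest/DH-group pair under test.
	rpc_cmd bdev_nvme_set_options \
		--dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

	# Attach with the host key; the controller key is optional.
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a 10.0.0.1 -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key$keyid" \
		${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

	# Authentication succeeded only if the bdev controller showed up.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

	rpc_cmd bdev_nvme_detach_controller nvme0
}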
00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: ]] 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.001 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:37.259 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.259 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:37.259 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.259 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.259 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.259 00:25:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.259 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:37.259 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.259 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:37.259 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:37.259 00:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:37.259 00:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:37.259 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:37.259 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.259 nvme0n1 00:24:37.259 00:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:37.259 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.259 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.259 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:37.259 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.259 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:37.259 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.259 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.259 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:37.259 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.259 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:37.259 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.259 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:37.259 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.259 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:37.259 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.259 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:37.259 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:37.259 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:37.259 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: ]] 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:37.260 00:25:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:37.260 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.518 nvme0n1 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:37.518 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.777 nvme0n1 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: ]] 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:37.777 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.036 nvme0n1 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: ]] 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.036 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.296 nvme0n1 00:24:38.296 
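nvmet_auth_set_key, whose echo frames recur above, performs four writes per tuple: the HMAC wrapper around the digest, the DH group, the host secret, and (when present) the controller secret. The redirection targets are not visible in the xtrace output; the configfs paths below are an assumption based on the Linux nvmet host attributes, not taken from this log:

# Hedged sketch of the target-side key setup. Only the echoed values are
# confirmed by the trace; the /sys/kernel/config/nvmet paths are assumed.
nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

	echo "hmac($digest)"  > "$host/dhchap_hash"     # e.g. 'hmac(sha256)'
	echo "$dhgroup"       > "$host/dhchap_dhgroup"  # e.g. ffdhe3072
	echo "${keys[keyid]}" > "$host/dhchap_key"      # DHHC-1 host secret
	# keyid 4 carries no controller secret, so this write is conditional.
	[[ -z ${ckeys[keyid]} ]] ||
		echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
}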
00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: ]] 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.296 00:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.296 nvme0n1 00:24:38.296 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.296 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.296 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.296 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.296 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.296 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.559 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.559 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.559 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.559 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.559 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.559 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.559 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:38.559 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.559 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:38.559 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:38.559 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:38.559 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:38.559 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:38.559 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:38.559 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
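The recurring ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment is the bash idiom for an optional flag: when the controller secret is empty or unset, the array stays empty and expands to zero arguments, which is why keyid 4 attaches with --dhchap-key alone. A standalone illustration with hypothetical values:

# Hypothetical values; index 0 set, index 1 empty, mirroring keyid 4.
ckeys=("DHHC-1:03:exampleSecret==" "")
for keyid in 0 1; do
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
	echo "keyid=$keyid adds ${#ckey[@]} args:" "${ckey[@]}"
done
# keyid=0 adds 2 args: --dhchap-ctrlr-key ckey0
# keyid=1 adds 0 args: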
00:24:38.559 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:38.559 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: ]] 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.560 nvme0n1 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.560 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.817 
00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.817 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.817 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.817 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.817 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.817 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.817 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:38.817 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.818 00:25:57 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.818 nvme0n1 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.818 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: ]] 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:39.075 00:25:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:39.075 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.333 nvme0n1 00:24:39.333 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:39.333 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.333 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.333 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:39.333 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.333 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:39.333 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.333 00:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.333 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:39.333 00:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: ]] 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:39.333 00:25:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:39.333 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.590 nvme0n1 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: ]] 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.590 00:25:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:39.590 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.847 nvme0n1 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
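The trace here is iterating digests (sha256, then sha384 later on) times DH groups (ffdhe2048 through ffdhe8192) times key IDs 0-4; each pass begins with nvmet_auth_set_key pushing the digest, DH group, and DHHC-1 secret(s) for the host NQN into the kernel nvmet target before the initiator attempts an authenticated connect. A minimal sketch of that helper follows, assuming the standard Linux nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and the keys/ckeys arrays the trace loops over; the real host/auth.sh may differ in detail:

    # Sketch only: provision DH-HMAC-CHAP material for one keyid on the
    # kernel nvmet target. Attribute names assume the standard Linux
    # nvmet configfs layout; keys[]/ckeys[] come from the test script.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac(${digest})"  > "${host}/dhchap_hash"     # e.g. hmac(sha256)
        echo "${dhgroup}"       > "${host}/dhchap_dhgroup"  # e.g. ffdhe4096
        echo "${keys[keyid]}"   > "${host}/dhchap_key"      # DHHC-1:xx:...:
        if [[ -n ${ckeys[keyid]:-} ]]; then                 # bidirectional auth,
            echo "${ckeys[keyid]}" > "${host}/dhchap_ctrl_key"  # only if a ctrl key exists
        fi
    }

In the trace, keyid 4 has an empty ckey ([[ -z '' ]]), which is why no controller key is echoed on that pass; the conditional write above mirrors that.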
00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: ]] 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:39.847 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.103 nvme0n1 00:24:40.103 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:40.103 00:25:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.103 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.103 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:40.103 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.103 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:40.103 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.103 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.103 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:40.103 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:40.359 00:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.616 nvme0n1 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:40.616 00:25:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: ]] 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:40.616 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.872 nvme0n1 00:24:40.872 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:40.872 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.872 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:40.872 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.872 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.872 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:40.872 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.872 
00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.872 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:40.872 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.872 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:40.872 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.872 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:40.872 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.872 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:40.872 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:40.872 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:40.872 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: ]] 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.128 00:25:59 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:41.128 00:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.385 nvme0n1 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: ]] 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:41.385 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.946 nvme0n1 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.946 
00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: ]] 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.946 00:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.947 00:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.947 00:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:41.947 00:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.947 00:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:41.947 00:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:41.947 00:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:41.947 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:41.947 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:41.947 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.203 nvme0n1 00:24:42.203 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:42.203 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.203 00:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.203 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:42.203 00:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:42.203 00:26:01 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:42.460 00:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:42.460 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.460 00:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.460 00:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.460 00:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.460 00:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.460 00:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.460 00:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.460 00:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.460 00:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.460 00:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.460 00:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.460 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:42.460 00:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:42.460 00:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.716 nvme0n1 00:24:42.716 00:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:42.716 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.716 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.716 00:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:42.716 00:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.716 00:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:42.716 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.716 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.716 00:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:42.716 00:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.716 00:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:42.716 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:42.716 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.716 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:42.716 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: ]] 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:42.717 00:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.279 nvme0n1 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.279 00:26:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: ]] 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:43.279 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.536 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:43.536 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.536 00:26:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:24:43.536 00:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:43.536 00:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:43.536 00:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.536 00:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.536 00:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:43.536 00:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.536 00:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:43.536 00:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:43.536 00:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:43.536 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:43.536 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:43.536 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.100 nvme0n1 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: ]] 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:44.100 00:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.665 nvme0n1 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.665 
00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: ]] 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
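For reference, here is one connect_authenticate cycle as it keeps recurring in this trace, rewritten as a hedged sketch against SPDK's rpc.py (rpc_cmd in the log is the test harness's wrapper around it; key3/ckey3 are keyring entry names the test registered earlier, and the NQNs, address, and port are taken verbatim from the trace). The get_main_ns_ip helper traced around this point only resolves the initiator-side address for the transport in use (tcp maps to NVMF_INITIATOR_IP, here 10.0.0.1):

    # Sketch only: restrict the allowed DH-HMAC-CHAP parameters, attach
    # with in-band auth, verify the controller came up, then tear down.
    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc.py bdev_nvme_detach_controller nvme0

A failed authentication would surface as bdev_nvme_attach_controller erroring out, so the name check against nvme0 is what makes each pass a positive test.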
00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:44.665 00:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.242 nvme0n1 00:24:45.242 00:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:45.242 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.242 00:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:45.242 00:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.242 00:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.242 00:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:45.242 
00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:45.242 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.805 nvme0n1 00:24:45.805 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:45.805 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.805 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.805 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:45.805 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.805 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: ]] 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.063 nvme0n1 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: ]] 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
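The initiator-side half, connect_authenticate (@55-@65), is easier to read condensed than traced: it restricts SPDK's bdev_nvme module to the one digest/DH-group pair under test, attaches over TCP with the numbered DH-HMAC-CHAP key (plus the controller key when one exists), confirms the controller came up, then detaches. A sketch reconstructed from the trace; rpc_cmd is the harness wrapper around scripts/rpc.py, and key${keyid}/ckey${keyid} name keys the harness registered earlier:

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})       # @58
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
        --dhchap-dhgroups "$dhgroup"                                      # @60
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"  # @61
    # @64: authentication succeeded iff the controller actually exists now
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0                             # @65
}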
00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.063 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:46.064 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.064 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:46.064 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:46.064 00:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:46.064 00:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:46.064 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:46.064 00:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.320 nvme0n1 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: ]] 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:46.320 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.577 nvme0n1 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: ]] 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:46.577 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.834 nvme0n1 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:46.834 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:46.835 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.835 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:46.835 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.835 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:46.835 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:46.835 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:46.835 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.835 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.835 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:46.835 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.835 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:46.835 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:46.835 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:46.835 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:46.835 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:46.835 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.092 nvme0n1 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- 
# xtrace_disable 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: ]] 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
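The nvmf/common.sh@741-@755 block that repeats before every attach is get_main_ns_ip resolving which address to dial: an associative array maps the transport to the name of an environment variable, and bash indirect expansion turns that name into its value (here NVMF_INITIATOR_IP resolves to 10.0.0.1). Reconstructed from the trace; TEST_TRANSPORT as the variable carrying "tcp" is an assumption, since the trace only shows the expanded value:

get_main_ns_ip() {
    local ip                                        # @741
    local -A ip_candidates=()                       # @742
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP      # @744
    ip_candidates["tcp"]=NVMF_INITIATOR_IP          # @745
    # @747: bail out if the transport is unset or unmapped
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}            # @748: ip now holds a variable *name*
    [[ -z ${!ip} ]] && return 1                     # @750: indirect expansion
    echo "${!ip}"                                   # @755
}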
00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.092 nvme0n1 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.092 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: ]] 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
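Stepping back, all of the repetition in this log comes from one three-deep loop in host/auth.sh (@100-@104): every digest x DH-group x key-id combination gets a fresh target key configuration and one authenticated connect. A sketch of that loop nest; the array contents are inferred from the combinations visible in this excerpt (sha256 and sha384 passes, ffdhe2048 through ffdhe8192), and the script likely also exercises sha512 and ffdhe6144 outside the portion shown:

for digest in "${digests[@]}"; do                           # @100
    for dhgroup in "${dhgroups[@]}"; do                     # @101
        for keyid in "${!keys[@]}"; do                      # @102
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104
        done
    done
done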
00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:47.349 00:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.349 nvme0n1 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.349 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:47.605 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.605 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.605 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: ]] 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.606 nvme0n1 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:47.606 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: ]] 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.863 nvme0n1 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.863 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.120 nvme0n1 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:48.120 00:26:06 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.120 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: ]] 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:48.121 00:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.378 nvme0n1 00:24:48.378 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:48.378 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.378 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.378 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:48.378 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.378 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: ]] 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:48.634 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.891 nvme0n1 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:48.891 00:26:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:48.891 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: ]] 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:48.892 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.149 nvme0n1 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: ]] 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:49.149 00:26:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:49.149 00:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.405 nvme0n1 00:24:49.405 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:24:49.406 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.663 nvme0n1 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: ]] 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:49.663 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.227 nvme0n1 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: ]] 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.227 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:50.228 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.228 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:50.228 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:50.228 00:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:50.228 00:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:50.228 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:50.228 00:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.484 nvme0n1 00:24:50.484 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:50.484 00:26:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.484 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.484 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:50.484 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: ]] 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:50.741 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.999 nvme0n1 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: ]] 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:50.999 00:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.565 nvme0n1 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
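Every cycle in the trace above has the same shape: program a key on the target, restrict the host to one digest/dhgroup pair, attach with DH-HMAC-CHAP, confirm the controller name, detach. A minimal sketch of that cycle, reconstructed from the xtrace (rpc_cmd, get_main_ns_ip, the keys/ckeys arrays and the nqn.2024-02.io.spdk names all appear in the trace itself; the function signature and argument order here are assumptions):

    # Sketch: one connect/verify/disconnect cycle as seen in the trace above.
    # keys[]/ckeys[] hold the DHHC-1 secrets echoed by nvmet_auth_set_key.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Optional controller key: expands to nothing when ckeys[keyid] is
        # empty (keyid 4 in this run has no ckey).
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Restrict the host to the digest/dhgroup pair under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Connect with DH-HMAC-CHAP, then verify the controller came up.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
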
00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:51.565 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.822 nvme0n1 00:24:51.822 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: ]] 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
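The ffdhe8192 pass that begins here is the third round of the same nested sweep over host/auth.sh@101-103. A sketch of the driving loops, assuming dhgroups and keys arrays populated earlier in the test (only the three groups visible in this excerpt are listed; the full run may cover more):

    # Sketch of the nested sweep driving this section of the log: each
    # FFDHE group is exercised against every configured key slot.
    dhgroups=("ffdhe4096" "ffdhe6144" "ffdhe8192")       # groups seen in this excerpt
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                   # keyids 0..4 in this run
            nvmet_auth_set_key "sha384" "$dhgroup" "$keyid"   # target side
            connect_authenticate "sha384" "$dhgroup" "$keyid" # host side
        done
    done
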
00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:52.081 00:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.664 nvme0n1 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: ]] 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.664 00:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.665 00:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.665 00:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.665 00:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.665 00:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.665 00:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.665 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:52.665 00:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:52.665 00:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.229 nvme0n1 00:24:53.229 00:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:53.229 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.229 00:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.229 00:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:53.229 00:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: ]] 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:53.229 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.794 nvme0n1 00:24:53.794 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:53.794 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.794 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.794 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:53.794 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: ]] 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:54.052 00:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.628 nvme0n1 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.628 00:26:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:54.628 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.193 nvme0n1 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: ]] 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.193 00:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.193 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:55.193 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.193 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.451 nvme0n1 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.451 00:26:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: ]] 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.451 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.452 00:26:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.452 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.452 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.452 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.452 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.452 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.452 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:55.452 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.452 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.709 nvme0n1 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: ]] 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.709 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.967 nvme0n1 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.967 00:26:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: ]] 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.967 00:26:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.967 nvme0n1 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.967 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:56.224 00:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.224 nvme0n1 00:24:56.224 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:56.224 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.224 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.224 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:56.224 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.224 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:56.224 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.224 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.224 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:56.224 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: ]] 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.482 nvme0n1 00:24:56.482 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:56.482 
00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: ]] 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:56.483 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:56.741 00:26:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.741 nvme0n1 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
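For readers following the trace: each nvmet_auth_set_key frame (host/auth.sh@42-51) re-provisions the Linux nvmet target with the secrets the next connect attempt must authenticate against, and the DHHC-1:<NN>: prefix on every secret records how the base64 payload was transformed (00 = no transform, 01/02/03 = SHA-256/384/512). A minimal sketch of the writes behind the sha512/ffdhe3072/keyid=2 set_key in flight at this point, assuming the target is driven through the standard nvmet configfs tree -- the /sys/kernel/config/nvmet/hosts path and the dhchap_* attribute names are assumptions here; the digest, DH group, and key strings are taken verbatim from the trace:

  # target-side provisioning for one iteration (sketch; run as root on the target)
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed configfs path
  echo 'hmac(sha512)' > "$host/dhchap_hash"      # digest echoed at auth.sh@48
  echo 'ffdhe3072'    > "$host/dhchap_dhgroup"   # DH group echoed at auth.sh@49
  echo 'DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx:' > "$host/dhchap_key"
  # a controller key is written only when the key table defines one (the [[ -z ... ]] guard at auth.sh@51)
  echo 'DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC:' > "$host/dhchap_ctrl_key"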
00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: ]] 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.741 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.999 nvme0n1 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:56.999 00:26:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: ]] 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.999 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.000 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.000 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.000 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
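On the host side, each connect_authenticate pass (host/auth.sh@55-61) boils down to two SPDK RPCs: pin the initiator to the digest/DH-group pair under test, then attach with the matching key pair; the interleaved get_main_ns_ip frames only resolve which address to dial (for tcp that is NVMF_INITIATOR_IP, i.e. 10.0.0.1). A standalone sketch of the keyid=3 iteration that follows -- rpc_cmd in the trace is the harness wrapper around SPDK's scripts/rpc.py, and key3/ckey3 name key objects the harness registered before this excerpt:

  # host-side connect for one iteration (sketch; flags as they appear in the trace)
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3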
00:24:57.000 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.000 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.000 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.000 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.000 00:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.000 00:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:57.000 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:57.000 00:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.257 nvme0n1 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:57.257 
00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.257 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.515 nvme0n1 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: ]] 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:57.515 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.772 nvme0n1 00:24:57.772 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:57.772 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.772 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:57.772 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.772 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.772 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: ]] 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.030 00:26:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:58.030 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:58.031 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.031 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:58.031 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.031 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.031 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.031 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.031 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.031 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.031 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.031 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.031 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.031 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.031 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.031 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:58.031 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:58.031 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.289 nvme0n1 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
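
Each keyid pass in the trace above builds the optional controller-key argument with bash's ${parameter:+word} expansion (visible at host/auth.sh@58) before handing it to bdev_nvme_attach_controller. A minimal standalone illustration of that operator — the array contents here are invented for the example, while the option spelling and variable names are copied from the trace:

    # ${ckeys[keyid]:+word} expands to word only when ckeys[keyid] is
    # set AND non-empty; otherwise it expands to nothing at all.
    ckeys=([0]="some-secret" [4]="")   # keyid 4 has no controller key

    keyid=0
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"   # -> --dhchap-ctrlr-key ckey0 (bidirectional auth)

    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"   # -> nothing: the attach runs with --dhchap-key only

This is why the keyid=4 attaches in this log carry --dhchap-key key4 but no --dhchap-ctrlr-key: authentication for that key is unidirectional (host to controller only).
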
00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: ]] 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.289 00:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:58.289 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:58.289 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.547 nvme0n1 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: ]] 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:58.547 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.805 nvme0n1 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:58.805 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.061 nvme0n1 00:24:59.061 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:59.061 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.061 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:59.061 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.061 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.061 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- 
# xtrace_disable 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: ]] 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
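
The get_main_ns_ip helper whose trace is interleaved here decides which address the host dials. A hedged reconstruction from the nvmf/common.sh@741-755 entries — the associative array stores the name of an environment variable per transport, and ${!ip} indirection turns that name into its value (10.0.0.1 in this run); the early-return guards are an assumption, since the trace only shows the tests that passed:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # TEST_TRANSPORT is tcp in this run, so ip=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}

        # Indirect expansion: ${!ip} reads the value of $NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }
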
00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:59.318 00:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.574 nvme0n1 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:59.574 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: ]] 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
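
The connect_authenticate sha512 ffdhe6144 1 call just above drives the host side of one authentication attempt. A sketch of the equivalent manual sequence, assuming scripts/rpc.py from the SPDK tree talks to the application acting as the NVMe-oF host, and that the DHHC-1 secrets were already registered under the names key1/ckey1 earlier in the test (that setup is outside this excerpt):

    # Pin host-side negotiation to the digest/dhgroup pair under test.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

    # Attach with authentication; --dhchap-ctrlr-key requests
    # bidirectional auth (the target proves itself back to the host).
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

After the attach, the test verifies success by listing controllers (bdev_nvme_get_controllers piped through jq -r '.[].name') and expecting nvme0, then detaches with bdev_nvme_detach_controller before moving to the next dhgroup/keyid combination.
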
00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:59.575 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.137 nvme0n1 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: ]] 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:00.137 00:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.394 nvme0n1 00:25:00.394 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:00.394 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.394 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.394 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:00.394 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: ]] 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.650 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:00.651 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:00.651 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.651 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:00.651 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.651 00:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.651 00:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.651 00:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.651 00:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.651 00:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.651 00:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.651 00:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.651 00:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.651 00:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.651 00:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.651 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:00.651 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:00.651 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.907 nvme0n1 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:00.908 00:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.471 nvme0n1 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:01.471 00:26:20 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1ZmE4ZmEzNzYwN2U2ZTg4NDE0M2YzYjM3YTg2MjOKRAP+: 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: ]] 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTQ0M2YzOWM5MDU0YzU0ZGZkZWFkODViNDY4ZDU4MDNhNTNjNWNlMDk5YjU5OWY1ZmU3ZjZlN2YyZmQwOTJkNxft5Wk=: 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:01.471 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.033 nvme0n1 00:25:02.033 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:02.033 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.033 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.033 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:02.033 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.033 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:02.033 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: ]] 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:02.034 00:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.599 nvme0n1 00:25:02.599 00:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:02.599 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.599 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.599 00:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:02.599 00:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.856 00:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:02.856 00:26:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.856 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.856 00:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:02.856 00:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.856 00:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:02.856 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.856 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:02.856 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.856 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:02.856 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:02.856 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:02.856 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:25:02.856 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:25:02.856 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU4ZDA3MTdmNGJkMDEwN2M4ZTVlMDM0NjAyOTEwMmM1VHFx: 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: ]] 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U4NGIyMjM0ODZlZTVlMGI4NTVlOWMxYTk5ZDhjYWT98szC: 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:02.857 00:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.421 nvme0n1 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQxN2RmMTEyZjdiZGMxNGM5ZDk3ZTNjOTgzYTRiZDU2OWY4ODRmMjA2NzczYWUzHI5gzw==: 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: ]] 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjQwMTQ4MDQ0ZTAzZGI5MDc4Y2QxODc5ZGRiODQ3YTWi70fD: 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:03.421 00:26:22 
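Note: on the target side, nvmet_auth_set_key is what produces the echoed 'hmac(sha512)', dhgroup, and DHHC-1 strings above: they are written into the kernel nvmet configfs entry for the host NQN. A sketch of what those writes amount to, assuming the in-kernel nvmet auth attribute layout (the attribute names are an assumption, not taken from this log):

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$host/dhchap_hash"
  echo ffdhe8192 > "$host/dhchap_dhgroup"
  echo 'DHHC-1:02:...' > "$host/dhchap_key"       # host secret
  echo 'DHHC-1:00:...' > "$host/dhchap_ctrl_key"  # controller secret (bidirectional passes only)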
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:03.421 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.422 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:03.422 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:03.422 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.422 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:03.422 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.422 00:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.422 00:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.422 00:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.422 00:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.422 00:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.422 00:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.422 00:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.422 00:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.422 00:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.422 00:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.422 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:03.422 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:03.422 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.987 nvme0n1 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmNjJlY2ZlMWE2NTZiZWJhMjNlNWY3Mzk4OTVjOTM3YWZkYjQ3YjJlYWYwNTQ5ZDJlOGFhY2E3NmVmMDEzYQd+kRA=: 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:25:03.987 00:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.922 nvme0n1 00:25:04.922 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:04.922 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.922 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.922 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:04.922 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.922 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:04.922 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.922 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.922 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:04.922 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.922 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:04.922 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:04.922 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGM1ZDU3ZWIxYWZjMWZhN2ZkYjc5MmE4ZTIwZjg4YmVjM2I0ODc3ZTgzMzM2M2MywVYPRg==: 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: ]] 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjlhOWIyN2U1Nzg0YjQ3MjgxNzhkYjM5MjZjYzkwYmE2NmU3ZDBjNDQ4Y2RjNDIwaDgnyA==: 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.923 
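Note: the keyid 4 pass just above differs from the earlier ones: its controller key is empty (ckey=''), so authentication is unidirectional and the attach passes only --dhchap-key key4. With the loop done, the target has been re-keyed with sha256/ffdhe2048 to stage the failure cases that follow. The unidirectional attach, condensed:

  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4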
00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@642 -- # local es=0 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.923 request: 00:25:04.923 { 00:25:04.923 "name": "nvme0", 00:25:04.923 "trtype": "tcp", 00:25:04.923 "traddr": "10.0.0.1", 00:25:04.923 "adrfam": "ipv4", 00:25:04.923 "trsvcid": "4420", 00:25:04.923 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:04.923 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:04.923 "prchk_reftag": false, 00:25:04.923 "prchk_guard": false, 00:25:04.923 "hdgst": false, 00:25:04.923 "ddgst": false, 00:25:04.923 "method": "bdev_nvme_attach_controller", 00:25:04.923 "req_id": 1 00:25:04.923 } 00:25:04.923 Got JSON-RPC error response 00:25:04.923 response: 00:25:04.923 { 00:25:04.923 "code": -5, 00:25:04.923 "message": "Input/output error" 00:25:04.923 } 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # es=1 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- 
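Note: the request/response pair above is the first expected failure: with the target now requiring DH-CHAP, bdev_nvme_attach_controller without any key is rejected and the RPC returns code -5 (Input/output error). The NOT wrapper inverts the exit status so the test only passes when the command fails; a simplified sketch of it (the real helper in autotest_common.sh also validates its argument and tracks es, as traced above):

  NOT() {
      if "$@"; then
          return 1   # command unexpectedly succeeded
      fi
      return 0       # command failed, which is what the test expects
  }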
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@642 -- # local es=0 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.923 request: 00:25:04.923 { 00:25:04.923 "name": "nvme0", 00:25:04.923 "trtype": "tcp", 00:25:04.923 "traddr": "10.0.0.1", 00:25:04.923 "adrfam": "ipv4", 00:25:04.923 "trsvcid": "4420", 00:25:04.923 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:04.923 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:04.923 "prchk_reftag": false, 00:25:04.923 "prchk_guard": false, 00:25:04.923 "hdgst": false, 00:25:04.923 "ddgst": false, 00:25:04.923 "dhchap_key": "key2", 00:25:04.923 "method": "bdev_nvme_attach_controller", 00:25:04.923 "req_id": 1 00:25:04.923 } 00:25:04.923 Got JSON-RPC error response 00:25:04.923 response: 00:25:04.923 { 00:25:04.923 "code": -5, 00:25:04.923 "message": "Input/output error" 00:25:04.923 } 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # es=1 00:25:04.923 00:26:23 
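Note: between failure cases the script checks that the rejected attach left no controller behind, by piping bdev_nvme_get_controllers through 'jq length' and expecting zero:

  (( $(./scripts/rpc.py bdev_nvme_get_controllers | jq length) == 0 ))

The second failure above uses the wrong host key (key2 against a target keyed for key1); the third, which follows, pairs the correct key1 with a mismatched controller key (ckey2). Both must fail with the same -5 error.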
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@642 -- # local es=0 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:04.923 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.179 request: 00:25:05.179 { 00:25:05.179 "name": "nvme0", 00:25:05.179 "trtype": "tcp", 00:25:05.179 "traddr": "10.0.0.1", 00:25:05.179 "adrfam": "ipv4", 
00:25:05.179 "trsvcid": "4420", 00:25:05.179 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:05.179 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:05.179 "prchk_reftag": false, 00:25:05.179 "prchk_guard": false, 00:25:05.179 "hdgst": false, 00:25:05.179 "ddgst": false, 00:25:05.179 "dhchap_key": "key1", 00:25:05.179 "dhchap_ctrlr_key": "ckey2", 00:25:05.179 "method": "bdev_nvme_attach_controller", 00:25:05.179 "req_id": 1 00:25:05.179 } 00:25:05.179 Got JSON-RPC error response 00:25:05.179 response: 00:25:05.179 { 00:25:05.179 "code": -5, 00:25:05.179 "message": "Input/output error" 00:25:05.179 } 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # es=1 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:05.179 rmmod nvme_tcp 00:25:05.179 rmmod nvme_fabrics 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1636083 ']' 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1636083 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@942 -- # '[' -z 1636083 ']' 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # kill -0 1636083 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@947 -- # uname 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1636083 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1636083' 00:25:05.179 killing process with pid 1636083 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@961 -- # kill 1636083 00:25:05.179 00:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # wait 1636083 00:25:05.436 00:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:25:05.436 00:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:05.436 00:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:05.436 00:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:05.436 00:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:05.436 00:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.436 00:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:05.436 00:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.395 00:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:07.395 00:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:07.395 00:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:07.395 00:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:07.395 00:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:07.395 00:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:25:07.395 00:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:07.395 00:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:07.395 00:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:07.395 00:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:07.395 00:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:07.395 00:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:07.395 00:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:09.929 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:09.929 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:09.929 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:09.929 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:09.929 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:09.929 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:09.929 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:09.929 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:09.929 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:09.929 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:09.929 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:09.929 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:09.929 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:09.929 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:09.929 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:09.929 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:10.866 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:10.866 00:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.V6d /tmp/spdk.key-null.f4k /tmp/spdk.key-sha256.m1G /tmp/spdk.key-sha384.JPC /tmp/spdk.key-sha512.6oO 
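Note: cleanup tears the kernel target down in dependency order, as traced above: disable, drop the port-to-subsystem link, remove child directories before parents, then unload the modules. In plain shell (paths as logged; that the initial 'echo 0' disables the namespace is an assumption):

  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable  # assumed target of the 'echo 0'
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe -r nvmet_tcp nvmet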
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:10.866 00:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:13.395 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:13.395 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:13.395 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:13.395 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:13.395 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:13.395 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:13.395 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:13.395 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:13.395 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:13.395 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:13.395 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:13.395 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:13.395 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:13.395 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:13.395 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:13.395 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:13.395 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:13.395 00:25:13.395 real 0m48.435s 00:25:13.395 user 0m43.490s 00:25:13.395 sys 0m11.311s 00:25:13.395 00:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1118 -- # xtrace_disable 00:25:13.395 00:26:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.395 ************************************ 00:25:13.395 END TEST nvmf_auth_host 00:25:13.395 ************************************ 00:25:13.395 00:26:32 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:25:13.395 00:26:32 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:25:13.395 00:26:32 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:13.395 00:26:32 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:25:13.395 00:26:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:25:13.395 00:26:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.395 ************************************ 00:25:13.395 START TEST nvmf_digest 00:25:13.395 ************************************ 00:25:13.395 00:26:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:13.395 * Looking for test storage... 
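Note: run_test is the harness wrapper producing the START TEST / END TEST banners above; it runs each suite as a timed child and propagates its status. A simplified sketch of its shape (the real helper in autotest_common.sh also handles timing and xtrace bookkeeping):

  run_test() {
      local test_name=$1; shift
      echo '************************************'
      echo "START TEST $test_name"
      echo '************************************'
      "$@"
      local rc=$?
      echo '************************************'
      echo "END TEST $test_name"
      echo '************************************'
      return $rc
  }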
00:25:13.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:13.395 00:26:32 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.395 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:13.395 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.395 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.395 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.395 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.395 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.395 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.395 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.395 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.395 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.395 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.395 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:13.396 00:26:32 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:25:13.396 00:26:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:18.666 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:18.666 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:18.666 Found net devices under 0000:86:00.0: cvl_0_0 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:18.666 Found net devices under 0000:86:00.1: cvl_0_1 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.666 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:18.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:18.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:25:18.667 00:25:18.667 --- 10.0.0.2 ping statistics --- 00:25:18.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.667 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:18.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:18.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:25:18.667 00:25:18.667 --- 10.0.0.1 ping statistics --- 00:25:18.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.667 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:18.667 ************************************ 00:25:18.667 START TEST nvmf_digest_clean 00:25:18.667 ************************************ 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1117 -- # run_digest 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1649117 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1649117 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@823 -- # '[' -z 1649117 ']' 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:25:18.667 00:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:18.667 [2024-07-16 00:26:37.413605] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:25:18.667 [2024-07-16 00:26:37.413653] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.667 [2024-07-16 00:26:37.472094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.925 [2024-07-16 00:26:37.552551] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.925 [2024-07-16 00:26:37.552583] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.925 [2024-07-16 00:26:37.552590] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.925 [2024-07-16 00:26:37.552596] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.925 [2024-07-16 00:26:37.552601] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
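[editor's note] The nvmf_tcp_init block further up moves one port of the NIC pair into a private network namespace so that target and initiator traffic crosses a real link even on a single host, and the target is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...) so it listens on 10.0.0.2:4420 while bdevperf connects from the root namespace. A minimal sketch of the same steps, using the cvl_0_0/cvl_0_1 device names from this run:

  # Move the target-side port into its own namespace; the initiator port stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Initiator gets 10.0.0.1, target gets 10.0.0.2 inside the namespace.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # Bring the links up and open the NVMe/TCP port on the initiator side.
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Sanity-check both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1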
00:25:18.925 [2024-07-16 00:26:37.552622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.491 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:25:19.491 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # return 0 00:25:19.491 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:19.491 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:19.491 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:19.491 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:19.491 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:19.491 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:19.491 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:19.491 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:19.491 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:19.491 null0 00:25:19.491 [2024-07-16 00:26:38.315347] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:19.491 [2024-07-16 00:26:38.339540] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.491 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:19.491 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:19.491 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:19.491 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:19.750 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:19.750 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:19.750 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:19.750 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:19.750 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1649319 00:25:19.750 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1649319 /var/tmp/bperf.sock 00:25:19.750 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:19.750 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@823 -- # '[' -z 1649319 ']' 00:25:19.750 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:19.750 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:25:19.750 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:25:19.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:19.750 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:25:19.750 00:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:19.750 [2024-07-16 00:26:38.372211] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:25:19.750 [2024-07-16 00:26:38.372257] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1649319 ] 00:25:19.750 [2024-07-16 00:26:38.425414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.750 [2024-07-16 00:26:38.503960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.685 00:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:25:20.685 00:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # return 0 00:25:20.685 00:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:20.685 00:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:20.685 00:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:20.685 00:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:20.685 00:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:21.252 nvme0n1 00:25:21.252 00:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:21.252 00:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:21.252 Running I/O for 2 seconds... 
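[editor's note] Each run_bperf pass drives bdevperf over its own RPC socket with the sequence just traced. A condensed sketch, with the socket path, target address, and NQN taken from this run:

  BPERF_SOCK=/var/tmp/bperf.sock
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # bdevperf was started with --wait-for-rpc, so finish framework init first.
  $SPDK/scripts/rpc.py -s $BPERF_SOCK framework_start_init

  # Attach the NVMe-oF controller with data digest enabled (--ddgst) so every
  # payload carries a CRC32C that the accel framework must compute and verify.
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Kick off the timed workload (2 seconds, per bdevperf's -t 2 argument).
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests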
00:25:23.151 00:25:23.151 Latency(us) 00:25:23.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.151 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:23.151 nvme0n1 : 2.00 27063.32 105.72 0.00 0.00 4724.74 2265.27 13392.14 00:25:23.151 =================================================================================================================== 00:25:23.151 Total : 27063.32 105.72 0.00 0.00 4724.74 2265.27 13392.14 00:25:23.151 0 00:25:23.151 00:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:23.151 00:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:23.151 00:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:23.151 00:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:23.151 00:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:23.151 | select(.opcode=="crc32c") 00:25:23.151 | "\(.module_name) \(.executed)"' 00:25:23.409 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:23.409 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:23.409 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:23.409 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:23.409 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1649319 00:25:23.409 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@942 -- # '[' -z 1649319 ']' 00:25:23.409 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # kill -0 1649319 00:25:23.409 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # uname 00:25:23.409 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:25:23.409 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1649319 00:25:23.409 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:25:23.409 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:25:23.409 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1649319' 00:25:23.409 killing process with pid 1649319 00:25:23.409 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill 1649319 00:25:23.409 Received shutdown signal, test time was about 2.000000 seconds 00:25:23.409 00:25:23.409 Latency(us) 00:25:23.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.409 =================================================================================================================== 00:25:23.409 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:23.409 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # wait 1649319 00:25:23.667 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:23.667 00:26:42 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:23.667 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:23.667 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:23.667 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:23.667 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:23.667 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:23.667 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1649866 00:25:23.667 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1649866 /var/tmp/bperf.sock 00:25:23.667 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:23.667 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@823 -- # '[' -z 1649866 ']' 00:25:23.667 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:23.667 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:25:23.668 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:23.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:23.668 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:25:23.668 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:23.668 [2024-07-16 00:26:42.380304] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:25:23.668 [2024-07-16 00:26:42.380354] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1649866 ] 00:25:23.668 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:23.668 Zero copy mechanism will not be used. 
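[editor's note] After every timed run the test reads accel statistics back from bdevperf and checks that crc32c actually executed, and in the expected module (software here, since scan_dsa=false). The "zero copy threshold" notices for the 131072-byte runs are informational: bdevperf skips its zero-copy path once the I/O size exceeds 65536 bytes. Roughly what host/digest.sh@93-96 does, with the jq program copied from get_accel_stats above:

  # Ask bdevperf which accel module executed crc32c and how many times.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  read -r acc_module acc_executed < <(
    $rpc -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

  # With scan_dsa=false the expected module is plain software crc32c.
  (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "crc32c ran in software"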
00:25:23.668 [2024-07-16 00:26:42.431617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.668 [2024-07-16 00:26:42.504578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.925 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:25:23.925 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # return 0 00:25:23.925 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:23.925 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:23.925 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:23.925 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:23.925 00:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:24.492 nvme0n1 00:25:24.492 00:26:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:24.492 00:26:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:24.492 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:24.492 Zero copy mechanism will not be used. 00:25:24.492 Running I/O for 2 seconds... 
00:25:27.026 00:25:27.026 Latency(us) 00:25:27.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.026 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:27.026 nvme0n1 : 2.04 4594.54 574.32 0.00 0.00 3415.40 968.79 43082.80 00:25:27.026 =================================================================================================================== 00:25:27.026 Total : 4594.54 574.32 0.00 0.00 3415.40 968.79 43082.80 00:25:27.026 0 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:27.026 | select(.opcode=="crc32c") 00:25:27.026 | "\(.module_name) \(.executed)"' 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1649866 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@942 -- # '[' -z 1649866 ']' 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # kill -0 1649866 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # uname 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1649866 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1649866' 00:25:27.026 killing process with pid 1649866 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill 1649866 00:25:27.026 Received shutdown signal, test time was about 2.000000 seconds 00:25:27.026 00:25:27.026 Latency(us) 00:25:27.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.026 =================================================================================================================== 00:25:27.026 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # wait 1649866 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:27.026 00:26:45 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1650533 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1650533 /var/tmp/bperf.sock 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@823 -- # '[' -z 1650533 ']' 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:27.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:25:27.026 00:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:27.026 [2024-07-16 00:26:45.777900] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:25:27.026 [2024-07-16 00:26:45.777948] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1650533 ] 00:25:27.026 [2024-07-16 00:26:45.831989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.285 [2024-07-16 00:26:45.904066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.876 00:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:25:27.876 00:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # return 0 00:25:27.876 00:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:27.876 00:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:27.876 00:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:28.160 00:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:28.160 00:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:28.419 nvme0n1 00:25:28.419 00:26:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:28.419 00:26:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:28.419 Running I/O for 2 seconds... 
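[editor's note] Every completed pass tears its bdevperf instance down with the same killprocess helper traced above (autotest_common.sh@942-966): confirm the pid is alive, make sure the command name is not a bare sudo wrapper before signalling, then kill and wait. A hedged reconstruction, not the verbatim helper:

  killprocess() {
    local pid=$1
    kill -0 "$pid" || return                 # already gone, nothing to do
    [[ $(uname) == Linux ]] && local pname=$(ps --no-headers -o comm= "$pid")
    [[ $pname == sudo ]] && return           # never signal a bare sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"               # bdevperf prints its shutdown stats while exiting
  }
  killprocess 1649866                        # pid of the second bperf instance above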
00:25:30.318 00:25:30.318 Latency(us) 00:25:30.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.318 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:30.318 nvme0n1 : 2.00 27008.58 105.50 0.00 0.00 4731.26 2008.82 6667.58 00:25:30.318 =================================================================================================================== 00:25:30.318 Total : 27008.58 105.50 0.00 0.00 4731.26 2008.82 6667.58 00:25:30.318 0 00:25:30.318 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:30.318 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:30.318 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:30.318 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:30.318 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:30.318 | select(.opcode=="crc32c") 00:25:30.318 | "\(.module_name) \(.executed)"' 00:25:30.576 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:30.576 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:30.576 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:30.576 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:30.576 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1650533 00:25:30.576 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@942 -- # '[' -z 1650533 ']' 00:25:30.576 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # kill -0 1650533 00:25:30.576 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # uname 00:25:30.576 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:25:30.576 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1650533 00:25:30.576 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:25:30.576 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:25:30.576 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1650533' 00:25:30.576 killing process with pid 1650533 00:25:30.576 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill 1650533 00:25:30.576 Received shutdown signal, test time was about 2.000000 seconds 00:25:30.576 00:25:30.576 Latency(us) 00:25:30.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.576 =================================================================================================================== 00:25:30.576 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:30.576 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # wait 1650533 00:25:30.834 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:30.834 00:26:49 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:30.834 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:30.834 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:30.834 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:30.834 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:30.834 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:30.834 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1651073 00:25:30.834 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:30.834 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1651073 /var/tmp/bperf.sock 00:25:30.834 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@823 -- # '[' -z 1651073 ']' 00:25:30.834 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:30.834 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:25:30.834 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:30.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:30.834 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:25:30.834 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:30.834 [2024-07-16 00:26:49.556945] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:25:30.834 [2024-07-16 00:26:49.556994] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1651073 ] 00:25:30.834 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:30.834 Zero copy mechanism will not be used. 
00:25:30.834 [2024-07-16 00:26:49.607906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.834 [2024-07-16 00:26:49.681216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.092 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:25:31.092 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # return 0 00:25:31.092 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:31.092 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:31.092 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:31.349 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:31.350 00:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:31.607 nvme0n1 00:25:31.607 00:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:31.607 00:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:31.607 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:31.607 Zero copy mechanism will not be used. 00:25:31.607 Running I/O for 2 seconds... 
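[editor's note] With this fourth run in flight, run_digest has covered the full matrix visible at host/digest.sh@128-131: both I/O directions at a small-block/deep-queue point and a large-block/shallow-queue point, all with DSA scanning off. The four invocations, in order:

  # (rw, block size, queue depth, scan_dsa) as called by host/digest.sh:
  run_bperf randread  4096   128 false   # small blocks, deep queue
  run_bperf randread  131072 16  false   # large blocks, shallow queue (zero copy skipped)
  run_bperf randwrite 4096   128 false
  run_bperf randwrite 131072 16  false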
00:25:34.135 00:25:34.135 Latency(us) 00:25:34.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.135 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:34.135 nvme0n1 : 2.00 4620.42 577.55 0.00 0.00 3458.73 1809.36 12195.39 00:25:34.135 =================================================================================================================== 00:25:34.135 Total : 4620.42 577.55 0.00 0.00 3458.73 1809.36 12195.39 00:25:34.135 0 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:34.135 | select(.opcode=="crc32c") 00:25:34.135 | "\(.module_name) \(.executed)"' 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1651073 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@942 -- # '[' -z 1651073 ']' 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # kill -0 1651073 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # uname 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1651073 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1651073' 00:25:34.135 killing process with pid 1651073 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill 1651073 00:25:34.135 Received shutdown signal, test time was about 2.000000 seconds 00:25:34.135 00:25:34.135 Latency(us) 00:25:34.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.135 =================================================================================================================== 00:25:34.135 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # wait 1651073 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1649117 00:25:34.135 00:26:52 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@942 -- # '[' -z 1649117 ']' 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # kill -0 1649117 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # uname 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1649117 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1649117' 00:25:34.135 killing process with pid 1649117 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill 1649117 00:25:34.135 00:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # wait 1649117 00:25:34.393 00:25:34.393 real 0m15.687s 00:25:34.393 user 0m29.948s 00:25:34.393 sys 0m4.163s 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1118 -- # xtrace_disable 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:34.393 ************************************ 00:25:34.393 END TEST nvmf_digest_clean 00:25:34.393 ************************************ 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1136 -- # return 0 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:34.393 ************************************ 00:25:34.393 START TEST nvmf_digest_error 00:25:34.393 ************************************ 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1117 -- # run_digest_error 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1651737 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1651737 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@823 -- # '[' -z 1651737 ']' 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # local max_retries=100 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # xtrace_disable 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:34.393 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:34.393 [2024-07-16 00:26:53.150713] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:25:34.393 [2024-07-16 00:26:53.150754] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.393 [2024-07-16 00:26:53.206112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.649 [2024-07-16 00:26:53.284199] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.649 [2024-07-16 00:26:53.284238] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.649 [2024-07-16 00:26:53.284247] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.649 [2024-07-16 00:26:53.284269] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.649 [2024-07-16 00:26:53.284275] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
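[editor's note] For nvmf_digest_error the target is deliberately restarted with --wait-for-rpc so that opcode routing can be changed before the accel framework initializes, as the accel_assign_opc call on the lines just below shows. A hedged sketch of that bring-up (paths shortened; the real commands run through ip netns exec cvl_0_0_ns_spdk as above):

  nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &      # hold init so opcodes can be re-routed
  rpc.py accel_assign_opc -o crc32c -m error    # send every crc32c through the error module
  # common_target_config then completes init and re-creates the 10.0.0.2:4420 listener.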
00:25:34.649 [2024-07-16 00:26:53.284292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.215 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:25:35.215 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # return 0 00:25:35.215 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:35.215 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:35.215 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:35.215 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:35.215 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:35.215 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:35.215 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:35.215 [2024-07-16 00:26:53.966297] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:35.215 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:35.215 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:35.216 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:35.216 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:35.216 00:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:35.216 null0 00:25:35.216 [2024-07-16 00:26:54.056579] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.474 [2024-07-16 00:26:54.080741] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.474 00:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:35.474 00:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:35.474 00:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:35.474 00:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:35.474 00:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:35.474 00:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:35.474 00:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1651971 00:25:35.474 00:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1651971 /var/tmp/bperf.sock 00:25:35.474 00:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:35.474 00:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@823 -- # '[' -z 1651971 ']' 00:25:35.474 00:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:35.474 00:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # local 
max_retries=100 00:25:35.474 00:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:35.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:35.474 00:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # xtrace_disable 00:25:35.474 00:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:35.474 [2024-07-16 00:26:54.112982] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:25:35.474 [2024-07-16 00:26:54.113027] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1651971 ] 00:25:35.474 [2024-07-16 00:26:54.166823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.474 [2024-07-16 00:26:54.246019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.410 00:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:25:36.410 00:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # return 0 00:25:36.410 00:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:36.410 00:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:36.410 00:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:36.410 00:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:36.410 00:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:36.410 00:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:36.410 00:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:36.410 00:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:36.669 nvme0n1 00:25:36.669 00:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:36.669 00:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:36.669 00:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:36.669 00:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:36.669 00:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:36.669 00:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
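[editor's note] The initiator-side setup just traced differs from the clean run in three ways: bdevperf is told to retry forever and keep per-error statistics (--nvme-error-stat --bdev-retry-count -1), injection is disabled while the controller attaches so the connect itself succeeds, and only then are 256 crc32c operations corrupted. Roughly (calls without -s go to the target's default /var/tmp/spdk.sock):

  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc.py accel_error_inject_error -o crc32c -t disable        # attach must succeed cleanly
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256 # now corrupt 256 digests
  bdevperf.py -s /var/tmp/bperf.sock perform_tests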
00:25:36.669 Running I/O for 2 seconds... 00:25:36.669 [2024-07-16 00:26:55.455320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:36.669 [2024-07-16 00:26:55.455351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.669 [2024-07-16 00:26:55.455362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.669 [2024-07-16 00:26:55.465638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:36.669 [2024-07-16 00:26:55.465665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.669 [2024-07-16 00:26:55.465674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.669 [2024-07-16 00:26:55.475962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:36.669 [2024-07-16 00:26:55.475984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.669 [2024-07-16 00:26:55.475993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.669 [2024-07-16 00:26:55.483902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:36.669 [2024-07-16 00:26:55.483924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.669 [2024-07-16 00:26:55.483932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.669 [2024-07-16 00:26:55.494149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:36.669 [2024-07-16 00:26:55.494171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.669 [2024-07-16 00:26:55.494180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.669 [2024-07-16 00:26:55.504171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:36.669 [2024-07-16 00:26:55.504192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.669 [2024-07-16 00:26:55.504201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.669 [2024-07-16 00:26:55.513369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:36.669 [2024-07-16 00:26:55.513391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.669 [2024-07-16 00:26:55.513403] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:36.929 [2024-07-16 00:26:55.523150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20)
00:25:36.929 [2024-07-16 00:26:55.523172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:36.929 [2024-07-16 00:26:55.523181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:36.929 [2024-07-16 00:26:55.532061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20)
00:25:36.929 [2024-07-16 00:26:55.532082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:36.929 [2024-07-16 00:26:55.532091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... elided: the same three-line pattern (data digest error on tqpair=(0x23faf20), the affected READ on sqid:1, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remaining READ commands, timestamps 00:26:55.541 through 00:26:56.877, with only cid and lba varying; the tqpair and the (00/22) status are identical throughout ...]
00:25:38.233 [2024-07-16 00:26:56.889414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20)
00:25:38.233 [2024-07-16 00:26:56.889436] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:56.889444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.233 [2024-07-16 00:26:56.897895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.233 [2024-07-16 00:26:56.897917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:56.897925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.233 [2024-07-16 00:26:56.909095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.233 [2024-07-16 00:26:56.909116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:56.909125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.233 [2024-07-16 00:26:56.917216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.233 [2024-07-16 00:26:56.917248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:56.917257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.233 [2024-07-16 00:26:56.927644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.233 [2024-07-16 00:26:56.927665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:56.927674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.233 [2024-07-16 00:26:56.937595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.233 [2024-07-16 00:26:56.937616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:56.937624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.233 [2024-07-16 00:26:56.946260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.233 [2024-07-16 00:26:56.946282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:56.946290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.233 [2024-07-16 00:26:56.956327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 
00:25:38.233 [2024-07-16 00:26:56.956348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:56.956357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.233 [2024-07-16 00:26:56.965467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.233 [2024-07-16 00:26:56.965487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:56.965496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.233 [2024-07-16 00:26:56.973801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.233 [2024-07-16 00:26:56.973821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:56.973830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.233 [2024-07-16 00:26:56.984763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.233 [2024-07-16 00:26:56.984785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:56.984796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.233 [2024-07-16 00:26:56.993939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.233 [2024-07-16 00:26:56.993960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:56.993969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.233 [2024-07-16 00:26:57.003060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.233 [2024-07-16 00:26:57.003082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:57.003090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.233 [2024-07-16 00:26:57.012485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.233 [2024-07-16 00:26:57.012508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:57.012517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.233 [2024-07-16 00:26:57.022127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.233 [2024-07-16 00:26:57.022150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:57.022158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.233 [2024-07-16 00:26:57.032538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.233 [2024-07-16 00:26:57.032559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:57.032567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.233 [2024-07-16 00:26:57.041485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.233 [2024-07-16 00:26:57.041507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:57.041516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.233 [2024-07-16 00:26:57.050805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.233 [2024-07-16 00:26:57.050826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:57.050834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.233 [2024-07-16 00:26:57.060433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.233 [2024-07-16 00:26:57.060454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:57.060462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.233 [2024-07-16 00:26:57.069407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.233 [2024-07-16 00:26:57.069431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:57.069439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.233 [2024-07-16 00:26:57.079383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.233 [2024-07-16 00:26:57.079403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.233 [2024-07-16 00:26:57.079412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.492 [2024-07-16 00:26:57.088401] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.492 [2024-07-16 00:26:57.088423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.492 [2024-07-16 00:26:57.088431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.492 [2024-07-16 00:26:57.098462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.492 [2024-07-16 00:26:57.098483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.492 [2024-07-16 00:26:57.098491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.108154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.108175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.108183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.117323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.117345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.117353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.125681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.125702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.125710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.135907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.135927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.135935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.144034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.144055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.144063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.154414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.154436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.154444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.164576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.164597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.164605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.173922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.173943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.173952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.182848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.182869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.182877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.193007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.193027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.193035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.203489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.203509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.203517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.212332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.212353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.212361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.222267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.222287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.222296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.231099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.231123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.231132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.241154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.241175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.241183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.250052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.250073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.250081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.258923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.258944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.258952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.268185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.268206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.268214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.277686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.277707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.277715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.287323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.287344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.287352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.296645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.296666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.296674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.305576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.305596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.305605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.315814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.315835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.315843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.324895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.324916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.324924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.334132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.334153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.493 [2024-07-16 00:26:57.334161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.493 [2024-07-16 00:26:57.343977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.493 [2024-07-16 00:26:57.343998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:38.493 [2024-07-16 00:26:57.344007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.753 [2024-07-16 00:26:57.352739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.753 [2024-07-16 00:26:57.352760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.753 [2024-07-16 00:26:57.352769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.753 [2024-07-16 00:26:57.362386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.753 [2024-07-16 00:26:57.362406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.753 [2024-07-16 00:26:57.362415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.753 [2024-07-16 00:26:57.372084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.753 [2024-07-16 00:26:57.372105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.753 [2024-07-16 00:26:57.372113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.753 [2024-07-16 00:26:57.380698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.753 [2024-07-16 00:26:57.380719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.753 [2024-07-16 00:26:57.380727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.753 [2024-07-16 00:26:57.390306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.753 [2024-07-16 00:26:57.390326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.753 [2024-07-16 00:26:57.390338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.753 [2024-07-16 00:26:57.399333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.753 [2024-07-16 00:26:57.399354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.753 [2024-07-16 00:26:57.399362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.753 [2024-07-16 00:26:57.409500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.753 [2024-07-16 00:26:57.409520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 
lba:25458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.753 [2024-07-16 00:26:57.409528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.753 [2024-07-16 00:26:57.417943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.753 [2024-07-16 00:26:57.417964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.753 [2024-07-16 00:26:57.417972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.753 [2024-07-16 00:26:57.428171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.753 [2024-07-16 00:26:57.428192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.753 [2024-07-16 00:26:57.428200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.753 [2024-07-16 00:26:57.437934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23faf20) 00:25:38.753 [2024-07-16 00:26:57.437955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.753 [2024-07-16 00:26:57.437963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.753 00:25:38.753 Latency(us) 00:25:38.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.753 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:38.753 nvme0n1 : 2.00 26852.27 104.89 0.00 0.00 4761.73 1966.08 12480.33 00:25:38.753 =================================================================================================================== 00:25:38.753 Total : 26852.27 104.89 0.00 0.00 4761.73 1966.08 12480.33 00:25:38.753 0 00:25:38.753 00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:38.753 00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:38.753 00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:38.753 | .driver_specific 00:25:38.753 | .nvme_error 00:25:38.753 | .status_code 00:25:38.753 | .command_transient_transport_error' 00:25:38.753 00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:39.013 00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 210 > 0 )) 00:25:39.013 00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1651971 00:25:39.013 00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 1651971 ']' 00:25:39.013 00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 1651971 00:25:39.013 00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname 00:25:39.013 00:26:57 
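The (( 210 > 0 )) check above is where this pass is judged: with --nvme-error-stat enabled, every data digest failure surfaces as a COMMAND TRANSIENT TRANSPORT ERROR completion, and bdev_get_iostat exposes the running count per bdev. A minimal Bash sketch of the accounting helpers as reconstructed from this xtrace output; the names and the jq filter are taken verbatim from the trace, while the exact function bodies in host/digest.sh may differ in detail:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

bperf_rpc() {
    # Send an RPC to the bdevperf instance over its private UNIX socket.
    "$rpc_py" -s /var/tmp/bperf.sock "$@"
}

get_transient_errcount() {
    # Read per-bdev NVMe error stats and extract the counter that the
    # data digest errors increment (jq filter copied from the trace).
    bperf_rpc bdev_get_iostat -b "$1" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# The pass asserts that at least one transient error was counted; in the
# run above the substituted value was 210:
(( $(get_transient_errcount nvme0n1) > 0 ))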
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1651971
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 1651971 ']'
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 1651971
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1651971
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # process_name=reactor_1
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']'
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1651971'
killing process with pid 1651971
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill 1651971
Received shutdown signal, test time was about 2.000000 seconds
00:25:39.013 Latency(us)
00:25:39.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:39.013 ===================================================================================================================
00:25:39.013 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # wait 1651971
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1652458
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1652458 /var/tmp/bperf.sock
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@823 -- # '[' -z 1652458 ']'
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # local max_retries=100
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # xtrace_disable
00:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:39.273 [2024-07-16 00:26:57.882334] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization...
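For the next pass (131072-byte reads at queue depth 16) the trace restarts bdevperf on core mask 0x2 with its own RPC socket, then blocks until that socket answers. A rough Bash equivalent of the launch-and-wait step, reusing rpc_py from the sketch above; the waitforlisten loop is a paraphrase (the real helper in common/autotest_common.sh does more bookkeeping), and rpc_get_methods is just used here as a harmless probe RPC:

bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

# -z makes bdevperf start idle and wait for an RPC to kick off the job.
"$bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Paraphrase of waitforlisten: poll the UNIX-domain RPC socket until it
# accepts an RPC, giving up after max_retries attempts.
max_retries=100
until "$rpc_py" -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null; do
    (( max_retries-- > 0 )) || exit 1
    sleep 0.1
done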
00:25:39.273 [2024-07-16 00:26:57.882379] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1652458 ]
00:25:39.273 I/O size of 131072 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
00:25:39.273 [2024-07-16 00:26:57.933017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:39.273 [2024-07-16 00:26:58.011529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # return 0
00:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:39.532 00:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable
00:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:39.532 00:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:39.790 nvme0n1
00:25:39.790 00:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable
00:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:39.791 00:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:40.049 I/O size of 131072 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
00:25:40.049 Running I/O for 2 seconds...
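Condensing the setup just traced: the host side enables per-bdev NVMe error counters with unlimited retries, clears any stale crc32c fault, attaches the controller with data digest enabled (--ddgst), and only then arms the accel layer to corrupt crc32c results so that affected READs complete with transient transport errors. A sketch of the same sequence, using exactly the RPCs from the trace; bperf_rpc targets bdevperf's /var/tmp/bperf.sock as above, rpc_cmd is the suite's default RPC helper, and the -o/-t/-i arguments are copied verbatim without asserting their precise semantics:

bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc_cmd accel_error_inject_error -o crc32c -t disable       # clear any stale injection
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0          # data digest on; creates nvme0n1
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 # arm crc32c corruption (-i 32 as traced)
bperf_py perform_tests                                      # start the 2-second randread job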
00:25:40.049 [2024-07-16 00:26:58.715032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0)
00:25:40.049 [2024-07-16 00:26:58.715064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:40.049 [2024-07-16 00:26:58.715074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... dozens of further record groups omitted, identical in shape for the 131072-byte pass: a data digest error on tqpair=(0x223f0b0), the failing READ (qid:1, len:32), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; timestamps run 00:26:58.725 through 00:26:59.024 with cid/lba/sqhd varying ...]
00:25:40.310 [2024-07-16 00:26:59.029779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0)
00:25:40.310 [2024-07-16 00:26:59.029800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:40.310 [2024-07-16 00:26:59.029808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:40.310 [2024-07-16 00:26:59.035591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0)
00:25:40.310 [2024-07-16 00:26:59.035613]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.310 [2024-07-16 00:26:59.035621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.041411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.311 [2024-07-16 00:26:59.041433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.041441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.047204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.311 [2024-07-16 00:26:59.047230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.047239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.052962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.311 [2024-07-16 00:26:59.052984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.052992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.058805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.311 [2024-07-16 00:26:59.058826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.058834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.064615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.311 [2024-07-16 00:26:59.064636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.064644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.070414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.311 [2024-07-16 00:26:59.070439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.070446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.076267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 
00:25:40.311 [2024-07-16 00:26:59.076287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.076295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.082033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.311 [2024-07-16 00:26:59.082054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.082062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.087809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.311 [2024-07-16 00:26:59.087830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.087838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.093598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.311 [2024-07-16 00:26:59.093619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.093627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.099413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.311 [2024-07-16 00:26:59.099434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.099442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.105150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.311 [2024-07-16 00:26:59.105171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.105179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.110928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.311 [2024-07-16 00:26:59.110950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.110957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.116667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.311 [2024-07-16 00:26:59.116687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.116695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.122439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.311 [2024-07-16 00:26:59.122461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.122468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.128220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.311 [2024-07-16 00:26:59.128247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.128255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.133984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.311 [2024-07-16 00:26:59.134005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.134014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.139779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.311 [2024-07-16 00:26:59.139800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.139808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.145602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.311 [2024-07-16 00:26:59.145625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.145635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.151421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.311 [2024-07-16 00:26:59.151442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.151450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.311 [2024-07-16 00:26:59.157182] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.311 [2024-07-16 00:26:59.157203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.311 [2024-07-16 00:26:59.157211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.571 [2024-07-16 00:26:59.163052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.571 [2024-07-16 00:26:59.163074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.571 [2024-07-16 00:26:59.163083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.571 [2024-07-16 00:26:59.168980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.571 [2024-07-16 00:26:59.169001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.571 [2024-07-16 00:26:59.169012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.571 [2024-07-16 00:26:59.174747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.571 [2024-07-16 00:26:59.174769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.571 [2024-07-16 00:26:59.174777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.571 [2024-07-16 00:26:59.180508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.571 [2024-07-16 00:26:59.180529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.571 [2024-07-16 00:26:59.180537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.571 [2024-07-16 00:26:59.186352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.571 [2024-07-16 00:26:59.186373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.571 [2024-07-16 00:26:59.186381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.571 [2024-07-16 00:26:59.192251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.571 [2024-07-16 00:26:59.192272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.571 [2024-07-16 00:26:59.192280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:25:40.571 [2024-07-16 00:26:59.198051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.571 [2024-07-16 00:26:59.198072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.571 [2024-07-16 00:26:59.198080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.571 [2024-07-16 00:26:59.204032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.571 [2024-07-16 00:26:59.204053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.571 [2024-07-16 00:26:59.204061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.571 [2024-07-16 00:26:59.210008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.571 [2024-07-16 00:26:59.210029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.571 [2024-07-16 00:26:59.210037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.571 [2024-07-16 00:26:59.215883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.571 [2024-07-16 00:26:59.215903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.571 [2024-07-16 00:26:59.215910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.571 [2024-07-16 00:26:59.221825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.571 [2024-07-16 00:26:59.221846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.571 [2024-07-16 00:26:59.221854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.571 [2024-07-16 00:26:59.227694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.571 [2024-07-16 00:26:59.227715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.571 [2024-07-16 00:26:59.227723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.571 [2024-07-16 00:26:59.233473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.571 [2024-07-16 00:26:59.233495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.571 [2024-07-16 00:26:59.233502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.571 [2024-07-16 00:26:59.239317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.571 [2024-07-16 00:26:59.239338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.571 [2024-07-16 00:26:59.239346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.571 [2024-07-16 00:26:59.245095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.571 [2024-07-16 00:26:59.245116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.571 [2024-07-16 00:26:59.245124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.251031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.251052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.251059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.256821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.256842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.256849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.262579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.262600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.262608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.268347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.268368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.268379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.274116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.274137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.274146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.279854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.279875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.279885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.285650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.285671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.285678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.291404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.291426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.291433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.297172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.297193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.297201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.302963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.302984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.302992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.308715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.308736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.308744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.314485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.314506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.314514] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.320273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.320298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.320305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.326022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.326043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.326050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.331849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.331871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.331878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.337646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.337666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.337674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.343397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.343419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.343426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.349136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.349157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.349165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.354865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.354885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:40.572 [2024-07-16 00:26:59.354893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.360626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.360647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.360655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.366387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.366408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.366416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.372187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.372208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.372216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.377947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.377967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.377975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.383748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.383768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.383777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.389595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.389616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.389624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.395437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.395459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.395467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.401246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.401267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.401275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.406976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.406997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.572 [2024-07-16 00:26:59.407005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.572 [2024-07-16 00:26:59.412732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.572 [2024-07-16 00:26:59.412754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.573 [2024-07-16 00:26:59.412762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.573 [2024-07-16 00:26:59.418485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.573 [2024-07-16 00:26:59.418508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.573 [2024-07-16 00:26:59.418520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.832 [2024-07-16 00:26:59.424310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.832 [2024-07-16 00:26:59.424333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.832 [2024-07-16 00:26:59.424341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.832 [2024-07-16 00:26:59.430158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.832 [2024-07-16 00:26:59.430180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.832 [2024-07-16 00:26:59.430188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.832 [2024-07-16 00:26:59.435933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.832 [2024-07-16 00:26:59.435954] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.832 [2024-07-16 00:26:59.435963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.832 [2024-07-16 00:26:59.441732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.832 [2024-07-16 00:26:59.441754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.832 [2024-07-16 00:26:59.441762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.832 [2024-07-16 00:26:59.447514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.832 [2024-07-16 00:26:59.447535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.832 [2024-07-16 00:26:59.447544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.832 [2024-07-16 00:26:59.453320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.832 [2024-07-16 00:26:59.453341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.832 [2024-07-16 00:26:59.453349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.832 [2024-07-16 00:26:59.459235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.832 [2024-07-16 00:26:59.459256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.832 [2024-07-16 00:26:59.459264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.832 [2024-07-16 00:26:59.465089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.832 [2024-07-16 00:26:59.465111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.832 [2024-07-16 00:26:59.465119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.832 [2024-07-16 00:26:59.470998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.832 [2024-07-16 00:26:59.471024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.832 [2024-07-16 00:26:59.471032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.832 [2024-07-16 00:26:59.477059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.832 [2024-07-16 00:26:59.477081] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.832 [2024-07-16 00:26:59.477089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.832 [2024-07-16 00:26:59.483122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.832 [2024-07-16 00:26:59.483145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.832 [2024-07-16 00:26:59.483153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.832 [2024-07-16 00:26:59.489157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.832 [2024-07-16 00:26:59.489178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.832 [2024-07-16 00:26:59.489187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.833 [2024-07-16 00:26:59.495111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.833 [2024-07-16 00:26:59.495134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.833 [2024-07-16 00:26:59.495142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.833 [2024-07-16 00:26:59.501318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.833 [2024-07-16 00:26:59.501340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.833 [2024-07-16 00:26:59.501349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.833 [2024-07-16 00:26:59.507429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.833 [2024-07-16 00:26:59.507452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.833 [2024-07-16 00:26:59.507460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.833 [2024-07-16 00:26:59.514108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.833 [2024-07-16 00:26:59.514132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.833 [2024-07-16 00:26:59.514141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.833 [2024-07-16 00:26:59.521298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x223f0b0) 00:25:40.833 [2024-07-16 00:26:59.521322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.833 [2024-07-16 00:26:59.521331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.833 [2024-07-16 00:26:59.529492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.833 [2024-07-16 00:26:59.529516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.833 [2024-07-16 00:26:59.529525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.833 [2024-07-16 00:26:59.537143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.833 [2024-07-16 00:26:59.537166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.833 [2024-07-16 00:26:59.537175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.833 [2024-07-16 00:26:59.545124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.833 [2024-07-16 00:26:59.545147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.833 [2024-07-16 00:26:59.545156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.833 [2024-07-16 00:26:59.552251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.833 [2024-07-16 00:26:59.552275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.833 [2024-07-16 00:26:59.552284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.833 [2024-07-16 00:26:59.559563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.833 [2024-07-16 00:26:59.559586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.833 [2024-07-16 00:26:59.559594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.833 [2024-07-16 00:26:59.567629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:40.833 [2024-07-16 00:26:59.567652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.833 [2024-07-16 00:26:59.567661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.833 [2024-07-16 00:26:59.575930] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0)
00:25:40.833 [2024-07-16 00:26:59.575954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:40.833 [2024-07-16 00:26:59.575963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:40.833 [2024-07-16 00:26:59.584428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0)
00:25:40.833 [2024-07-16 00:26:59.584451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:40.833 [2024-07-16 00:26:59.584460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:40.833 [2024-07-16 00:26:59.593576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0)
00:25:40.833 [2024-07-16 00:26:59.593600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:40.833 [2024-07-16 00:26:59.593614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line sequence (nvme_tcp.c:1459 *ERROR*: data digest error on tqpair=(0x223f0b0); nvme_qpair.c:243 READ command print; nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for qid:1, cid:0 through cid:15, from [2024-07-16 00:26:59.602698] to [2024-07-16 00:27:00.631640]; repetitions elided ...]
00:25:41.874 [2024-07-16 00:27:00.641291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0)
00:25:41.874 [2024-07-16 00:27:00.641313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:41.874 [2024-07-16 00:27:00.641322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.874 [2024-07-16 00:27:00.651754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:41.874 [2024-07-16 00:27:00.651778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.874 [2024-07-16 00:27:00.651787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.874 [2024-07-16 00:27:00.662400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:41.874 [2024-07-16 00:27:00.662423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.874 [2024-07-16 00:27:00.662432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.874 [2024-07-16 00:27:00.672778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:41.874 [2024-07-16 00:27:00.672805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.874 [2024-07-16 00:27:00.672814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.874 [2024-07-16 00:27:00.682710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:41.874 [2024-07-16 00:27:00.682734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.874 [2024-07-16 00:27:00.682742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.874 [2024-07-16 00:27:00.691561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:41.874 [2024-07-16 00:27:00.691582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.874 [2024-07-16 00:27:00.691591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.874 [2024-07-16 00:27:00.699628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:41.874 [2024-07-16 00:27:00.699649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.874 [2024-07-16 00:27:00.699658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.874 [2024-07-16 00:27:00.707247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223f0b0) 00:25:41.874 [2024-07-16 00:27:00.707267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.874 [2024-07-16 00:27:00.707275] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:41.874
00:25:41.874 Latency(us)
00:25:41.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:41.874 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:41.874 nvme0n1 : 2.00 4279.50 534.94 0.00 0.00 3735.34 918.93 13449.13
00:25:41.874 ===================================================================================================================
00:25:41.874 Total : 4279.50 534.94 0.00 0.00 3735.34 918.93 13449.13
00:25:41.874 0
00:25:42.134 00:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:42.134 00:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:42.134 00:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:42.134 00:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:42.134 | .driver_specific
00:25:42.134 | .nvme_error
00:25:42.134 | .status_code
00:25:42.134 | .command_transient_transport_error'
00:25:42.134 00:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 276 > 0 ))
00:25:42.134 00:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1652458
00:25:42.134 00:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 1652458 ']'
00:25:42.134 00:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 1652458
00:25:42.134 00:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname
00:25:42.134 00:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:25:42.134 00:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1652458
00:25:42.134 00:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # process_name=reactor_1
00:25:42.134 00:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']'
00:25:42.134 00:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1652458'
00:25:42.134 killing process with pid 1652458
00:25:42.134 00:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill 1652458
00:25:42.134 Received shutdown signal, test time was about 2.000000 seconds
00:25:42.134
00:25:42.134 Latency(us)
00:25:42.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:42.134 ===================================================================================================================
00:25:42.134 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:42.134 00:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # wait 1652458
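For reference, the transient-error check traced above is one RPC round-trip plus a jq filter. A minimal standalone sketch, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock and that SPDK_DIR points at the SPDK checkout (SPDK_DIR is an illustrative name, not from the log):

  # Sketch: count NVMe transient transport errors seen by a bdevperf bdev.
  # SPDK_DIR is a placeholder for the SPDK checkout used by this job.
  errcount=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')
  # The harness treats the phase as passed when at least one such error was
  # recorded, mirroring the (( 276 > 0 )) check in the trace above.
  (( errcount > 0 )) && echo "observed ${errcount} transient transport errors"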
00:25:42.392 00:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:25:42.392 00:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:42.392 00:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:25:42.392 00:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:25:42.392 00:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:25:42.392 00:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1653202
00:25:42.392 00:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:25:42.392 00:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1653202 /var/tmp/bperf.sock
00:25:42.392 00:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@823 -- # '[' -z 1653202 ']'
00:25:42.392 00:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:42.392 00:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # local max_retries=100
00:25:42.392 00:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:42.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:42.392 00:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # xtrace_disable
00:25:42.392 00:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:42.392 [2024-07-16 00:27:01.151630] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization...
00:25:42.393 [2024-07-16 00:27:01.151682] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653202 ]
00:25:42.393 [2024-07-16 00:27:01.201742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:42.393 [2024-07-16 00:27:01.280555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:43.259 00:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:25:43.259 00:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # return 0
00:25:43.259 00:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:43.259 00:27:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:43.518 00:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:43.518 00:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:43.518 00:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:43.518 00:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:43.518 00:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:43.518 00:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:43.777 nvme0n1
00:25:43.777 00:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:25:43.777 00:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:43.777 00:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:43.777 00:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:43.777 00:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:43.777 00:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:43.777 Running I/O for 2 seconds...
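Condensed, the randwrite phase traced above runs the following sequence. A sketch only: the paths are the job's own, but the backgrounding and the readiness loop are simplified relative to the harness's waitforlisten helper, and the readiness probe via rpc_get_methods is an assumption, not taken from this log:

  # Sketch of the write-path digest-error phase, under the assumptions above.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC_SOCK=/var/tmp/bperf.sock
  # Start bdevperf idle (-z waits for RPC) and wait until its socket answers.
  "$SPDK"/build/examples/bdevperf -m 2 -r "$RPC_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
  until "$SPDK"/scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  # Keep per-NVMe error counters and retry indefinitely so transient errors are absorbed.
  "$SPDK"/scripts/rpc.py -s "$RPC_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the TCP controller with data digest enabled (--ddgst).
  "$SPDK"/scripts/rpc.py -s "$RPC_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Inject crc32c corruption (same -o/-t/-i arguments as the traced rpc_cmd call).
  "$SPDK"/scripts/rpc.py -s "$RPC_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 256
  # Kick off the workload; the data digest errors below are the expected result.
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$RPC_SOCK" perform_tests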
00:25:43.777 [2024-07-16 00:27:02.511863] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58
00:25:43.777 [2024-07-16 00:27:02.512072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:43.777 [2024-07-16 00:27:02.512099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:43.777 [2024-07-16 00:27:02.521763] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58
00:25:43.777 [2024-07-16 00:27:02.521967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:43.777 [2024-07-16 00:27:02.521990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:43.777 [2024-07-16 00:27:02.531569] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58
00:25:43.778 [2024-07-16 00:27:02.531771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:43.778 [2024-07-16 00:27:02.531791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:43.778 [2024-07-16 00:27:02.541445] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58
00:25:43.778 [2024-07-16 00:27:02.541643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:43.778 [2024-07-16 00:27:02.541663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:43.778 [2024-07-16 00:27:02.551212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58
00:25:43.778 [2024-07-16 00:27:02.551427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:43.778 [2024-07-16 00:27:02.551454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:43.778 [2024-07-16 00:27:02.561040]
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:43.778 [2024-07-16 00:27:02.561238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.778 [2024-07-16 00:27:02.561279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:43.778 [2024-07-16 00:27:02.570830] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:43.778 [2024-07-16 00:27:02.571029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.778 [2024-07-16 00:27:02.571046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:43.778 [2024-07-16 00:27:02.580601] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:43.778 [2024-07-16 00:27:02.580799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.778 [2024-07-16 00:27:02.580817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:43.778 [2024-07-16 00:27:02.590361] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:43.778 [2024-07-16 00:27:02.590561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.778 [2024-07-16 00:27:02.590581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:43.778 [2024-07-16 00:27:02.600191] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:43.778 [2024-07-16 00:27:02.600400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.778 [2024-07-16 00:27:02.600420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:43.778 [2024-07-16 00:27:02.609916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:43.778 [2024-07-16 00:27:02.610113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.778 [2024-07-16 00:27:02.610132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:43.778 [2024-07-16 00:27:02.619769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:43.778 [2024-07-16 00:27:02.619974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.778 [2024-07-16 00:27:02.619993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:43.778 [2024-07-16 
00:27:02.629525] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:43.778 [2024-07-16 00:27:02.629720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.778 [2024-07-16 00:27:02.629738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.037 [2024-07-16 00:27:02.639294] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.037 [2024-07-16 00:27:02.639497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.037 [2024-07-16 00:27:02.639517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.037 [2024-07-16 00:27:02.648993] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.037 [2024-07-16 00:27:02.649194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.037 [2024-07-16 00:27:02.649221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.037 [2024-07-16 00:27:02.658673] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.037 [2024-07-16 00:27:02.658872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.037 [2024-07-16 00:27:02.658889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.037 [2024-07-16 00:27:02.668483] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.037 [2024-07-16 00:27:02.668684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.037 [2024-07-16 00:27:02.668704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.037 [2024-07-16 00:27:02.678198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.037 [2024-07-16 00:27:02.678405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.037 [2024-07-16 00:27:02.678424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.037 [2024-07-16 00:27:02.687960] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.037 [2024-07-16 00:27:02.688159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.037 [2024-07-16 00:27:02.688178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.037 
[2024-07-16 00:27:02.697665] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.037 [2024-07-16 00:27:02.697869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.037 [2024-07-16 00:27:02.697888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.037 [2024-07-16 00:27:02.707346] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.037 [2024-07-16 00:27:02.707542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.037 [2024-07-16 00:27:02.707560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.037 [2024-07-16 00:27:02.717132] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.037 [2024-07-16 00:27:02.717339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.037 [2024-07-16 00:27:02.717365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.037 [2024-07-16 00:27:02.726889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.037 [2024-07-16 00:27:02.727093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.037 [2024-07-16 00:27:02.727113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.037 [2024-07-16 00:27:02.736654] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.037 [2024-07-16 00:27:02.736854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.037 [2024-07-16 00:27:02.736873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.037 [2024-07-16 00:27:02.746491] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.037 [2024-07-16 00:27:02.746688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.037 [2024-07-16 00:27:02.746708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.037 [2024-07-16 00:27:02.756271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.037 [2024-07-16 00:27:02.756470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.037 [2024-07-16 00:27:02.756489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:25:44.037 [2024-07-16 00:27:02.766072] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.037 [2024-07-16 00:27:02.766273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.037 [2024-07-16 00:27:02.766291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.037 [2024-07-16 00:27:02.775949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.038 [2024-07-16 00:27:02.776149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.038 [2024-07-16 00:27:02.776168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.038 [2024-07-16 00:27:02.785708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.038 [2024-07-16 00:27:02.785904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.038 [2024-07-16 00:27:02.785923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.038 [2024-07-16 00:27:02.795481] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.038 [2024-07-16 00:27:02.795680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.038 [2024-07-16 00:27:02.795699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.038 [2024-07-16 00:27:02.805178] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.038 [2024-07-16 00:27:02.805382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.038 [2024-07-16 00:27:02.805401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.038 [2024-07-16 00:27:02.814809] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.038 [2024-07-16 00:27:02.815004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.038 [2024-07-16 00:27:02.815025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.038 [2024-07-16 00:27:02.824814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.038 [2024-07-16 00:27:02.825014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.038 [2024-07-16 00:27:02.825032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 00:25:44.038 [2024-07-16 00:27:02.834451] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.038 [2024-07-16 00:27:02.834647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.038 [2024-07-16 00:27:02.834667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.038 [2024-07-16 00:27:02.844166] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.038 [2024-07-16 00:27:02.844377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.038 [2024-07-16 00:27:02.844397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.038 [2024-07-16 00:27:02.853907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.038 [2024-07-16 00:27:02.854103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.038 [2024-07-16 00:27:02.854122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.038 [2024-07-16 00:27:02.863613] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.038 [2024-07-16 00:27:02.863816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.038 [2024-07-16 00:27:02.863835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.038 [2024-07-16 00:27:02.873356] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.038 [2024-07-16 00:27:02.873565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.038 [2024-07-16 00:27:02.873582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.038 [2024-07-16 00:27:02.883178] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.038 [2024-07-16 00:27:02.883402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.038 [2024-07-16 00:27:02.883421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.297 [2024-07-16 00:27:02.892915] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.297 [2024-07-16 00:27:02.893116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.297 [2024-07-16 00:27:02.893134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 
m:0 dnr:0 00:25:44.297 [2024-07-16 00:27:02.902722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.297 [2024-07-16 00:27:02.902926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.297 [2024-07-16 00:27:02.902945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.297 [2024-07-16 00:27:02.912448] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.297 [2024-07-16 00:27:02.912659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.297 [2024-07-16 00:27:02.912678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.297 [2024-07-16 00:27:02.922197] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.297 [2024-07-16 00:27:02.922399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.297 [2024-07-16 00:27:02.922418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.297 [2024-07-16 00:27:02.931884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.297 [2024-07-16 00:27:02.932083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.297 [2024-07-16 00:27:02.932102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.297 [2024-07-16 00:27:02.941583] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.297 [2024-07-16 00:27:02.941784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.297 [2024-07-16 00:27:02.941803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.297 [2024-07-16 00:27:02.951304] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.297 [2024-07-16 00:27:02.951507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.297 [2024-07-16 00:27:02.951526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.297 [2024-07-16 00:27:02.961031] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.297 [2024-07-16 00:27:02.961231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.297 [2024-07-16 00:27:02.961249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:007e p:0 m:0 dnr:0 00:25:44.297 [2024-07-16 00:27:02.970640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.297 [2024-07-16 00:27:02.970837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.297 [2024-07-16 00:27:02.970856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.297 [2024-07-16 00:27:02.980478] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.297 [2024-07-16 00:27:02.980688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.297 [2024-07-16 00:27:02.980706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.297 [2024-07-16 00:27:02.990118] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.297 [2024-07-16 00:27:02.990325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.297 [2024-07-16 00:27:02.990343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.297 [2024-07-16 00:27:02.999886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.297 [2024-07-16 00:27:03.000085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.297 [2024-07-16 00:27:03.000104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.298 [2024-07-16 00:27:03.009658] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.298 [2024-07-16 00:27:03.009854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.298 [2024-07-16 00:27:03.009871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.298 [2024-07-16 00:27:03.019417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.298 [2024-07-16 00:27:03.019622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.298 [2024-07-16 00:27:03.019639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.298 [2024-07-16 00:27:03.029263] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.298 [2024-07-16 00:27:03.029469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.298 [2024-07-16 00:27:03.029488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.298 [2024-07-16 00:27:03.039052] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.298 [2024-07-16 00:27:03.039252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.298 [2024-07-16 00:27:03.039270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.298 [2024-07-16 00:27:03.049068] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.298 [2024-07-16 00:27:03.049270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.298 [2024-07-16 00:27:03.049289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.298 [2024-07-16 00:27:03.058924] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.298 [2024-07-16 00:27:03.059128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.298 [2024-07-16 00:27:03.059146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.298 [2024-07-16 00:27:03.068622] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.298 [2024-07-16 00:27:03.068818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.298 [2024-07-16 00:27:03.068837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.298 [2024-07-16 00:27:03.078320] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.298 [2024-07-16 00:27:03.078518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.298 [2024-07-16 00:27:03.078537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.298 [2024-07-16 00:27:03.088089] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.298 [2024-07-16 00:27:03.088292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.298 [2024-07-16 00:27:03.088310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.298 [2024-07-16 00:27:03.097799] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.298 [2024-07-16 00:27:03.097997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.298 [2024-07-16 00:27:03.098016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.298 [2024-07-16 00:27:03.107540] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.298 [2024-07-16 00:27:03.107741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.298 [2024-07-16 00:27:03.107760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.298 [2024-07-16 00:27:03.117283] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.298 [2024-07-16 00:27:03.117479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.298 [2024-07-16 00:27:03.117497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.298 [2024-07-16 00:27:03.127019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.298 [2024-07-16 00:27:03.127217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.298 [2024-07-16 00:27:03.127240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.298 [2024-07-16 00:27:03.136897] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.298 [2024-07-16 00:27:03.137102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.298 [2024-07-16 00:27:03.137123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.298 [2024-07-16 00:27:03.146603] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.298 [2024-07-16 00:27:03.146801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.298 [2024-07-16 00:27:03.146820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.560 [2024-07-16 00:27:03.156415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.560 [2024-07-16 00:27:03.156616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.560 [2024-07-16 00:27:03.156643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.560 [2024-07-16 00:27:03.166248] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.560 [2024-07-16 00:27:03.166448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.560 [2024-07-16 00:27:03.166467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.560 [2024-07-16 00:27:03.176167] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.560 [2024-07-16 00:27:03.176374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.560 [2024-07-16 00:27:03.176393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.560 [2024-07-16 00:27:03.185980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.560 [2024-07-16 00:27:03.186180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.560 [2024-07-16 00:27:03.186200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.560 [2024-07-16 00:27:03.195768] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.560 [2024-07-16 00:27:03.195967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.560 [2024-07-16 00:27:03.195985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.560 [2024-07-16 00:27:03.205515] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.560 [2024-07-16 00:27:03.205714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.560 [2024-07-16 00:27:03.205733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.561 [2024-07-16 00:27:03.215292] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.561 [2024-07-16 00:27:03.215491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.561 [2024-07-16 00:27:03.215511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.561 [2024-07-16 00:27:03.225069] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.561 [2024-07-16 00:27:03.225268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.561 [2024-07-16 00:27:03.225285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.561 [2024-07-16 00:27:03.234742] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58 00:25:44.561 [2024-07-16 00:27:03.234937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.561 [2024-07-16 00:27:03.234957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:44.561 [2024-07-16 00:27:03.244459] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58
00:25:44.561 [2024-07-16 00:27:03.244667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:44.561 [2024-07-16 00:27:03.244686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:44.561 [2024-07-16 00:27:03.254246] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58
00:25:44.561 [2024-07-16 00:27:03.254443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:44.561 [2024-07-16 00:27:03.254460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:44.561 [2024-07-16 00:27:03.263981] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58
00:25:44.561 [2024-07-16 00:27:03.264177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:44.561 [2024-07-16 00:27:03.264194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0
[... the same data_crc32_calc_done *ERROR* / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplet repeats about every 10 ms (cid cycling 0-3, lba varying) from 00:27:03.273775 through 00:27:04.493971 ...]
00:25:45.863 [2024-07-16 00:27:04.503507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b54d0) with pdu=0x2000190feb58
00:25:45.863 [2024-07-16 00:27:04.503705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:45.863 [2024-07-16 00:27:04.503722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:45.863
00:25:45.863 Latency(us)
00:25:45.863 Device Information : runtime(s)    IOPS      MiB/s    Fail/s   TO/s    Average  min      max
00:25:45.863 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:45.863 nvme0n1            :       2.00   26106.77  101.98   0.00     0.00    4894.20  4445.05  10941.66
00:25:45.863 ===================================================================================================================
00:25:45.863 Total              :              26106.77  101.98   0.00     0.00    4894.20  4445.05  10941.66
00:25:45.863 0
00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 205 > 0 ))
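For anyone reproducing this check by hand: the jq pipeline above maps to a short Python sketch. This is a hypothetical helper, not part of digest.sh; it only assumes the rpc.py path and bperf socket that appear verbatim in the trace, and the JSON path it walks is exactly the one the jq filter selects.

    import json
    import subprocess

    RPC_PY = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    def get_transient_errcount(bdev="nvme0n1", sock="/var/tmp/bperf.sock"):
        # The nvme_error counters are only populated when the bdev_nvme layer
        # was configured with --nvme-error-stat (see the bdev_nvme_set_options
        # call traced further down for the next run).
        raw = subprocess.check_output([RPC_PY, "-s", sock, "bdev_get_iostat", "-b", bdev])
        stat = json.loads(raw)
        # Same selection as the jq filter above.
        return int(stat["bdevs"][0]["driver_specific"]["nvme_error"]
                       ["status_code"]["command_transient_transport_error"])

    print(get_transient_errcount())   # this run reported 205

On this run it would print 205, the value the (( 205 > 0 )) assertion above consumes.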
00:25:45.863 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 205 > 0 ))
00:25:45.863 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1653202
00:25:45.863 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 1653202 ']'
00:25:45.863 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 1653202
00:25:45.863 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname
00:25:45.863 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:25:45.863 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1653202
00:25:46.122 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # process_name=reactor_1
00:25:46.122 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']'
00:25:46.122 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1653202'
00:25:46.122 killing process with pid 1653202
00:25:46.122 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill 1653202
00:25:46.122 Received shutdown signal, test time was about 2.000000 seconds
00:25:46.122
00:25:46.122                                                                                                Latency(us)
00:25:46.122 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:46.122 ===================================================================================================================
00:25:46.122 Total                                                                    :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:25:46.122 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # wait 1653202
00:25:46.122 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:25:46.122 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:46.122 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:25:46.122 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:25:46.122 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:25:46.122 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1653753
00:25:46.122 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1653753 /var/tmp/bperf.sock
00:25:46.122 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:25:46.122 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@823 -- # '[' -z 1653753 ']'
00:25:46.122 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:46.122 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # local max_retries=100
00:25:46.122 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:46.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
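waitforlisten then blocks the script until the freshly forked bdevperf (pid 1653753) answers on its UNIX-domain RPC socket, with max_retries=100 bounding the poll. A simplified loop in the same spirit (a sketch, not the actual autotest_common.sh implementation; rpc_get_methods is used here only as a cheap liveness probe, and $rootdir is again assumed to be the SPDK checkout):

    # poll until the RPC server answers on the socket, or give up
    wait_for_rpc() {
        local pid=$1 rpc_addr=$2 max_retries=${3:-100}
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target process died before listening
            if "$rootdir/scripts/rpc.py" -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                             # RPC server is up
            fi
            sleep 0.5
        done
        return 1                                     # retries exhausted
    }

    wait_for_rpc 1653753 /var/tmp/bperf.sock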
00:25:46.122 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # xtrace_disable
00:25:46.122 00:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:46.122 [2024-07-16 00:27:04.963482] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization...
00:25:46.122 [2024-07-16 00:27:04.963527] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653753 ]
00:25:46.122 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:46.122 Zero copy mechanism will not be used.
00:25:46.381 [2024-07-16 00:27:05.017220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:46.381 [2024-07-16 00:27:05.086252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:46.949 00:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:25:46.949 00:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # return 0
00:25:46.949 00:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:46.949 00:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:47.208 00:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:47.208 00:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:47.208 00:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:47.208 00:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:47.208 00:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:47.208 00:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:47.466 nvme0n1
00:25:47.466 00:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:25:47.466 00:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:47.466 00:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:47.466 00:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:47.467 00:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:47.467 00:27:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:47.467 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:47.467 Zero copy mechanism will not be used.
00:25:47.467 Running I/O for 2 seconds...
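That block is the whole error-injection setup for the 128 KiB pass: bdevperf is told to keep per-controller NVMe error statistics and to retry failed I/O indefinitely, the controller is attached with --ddgst so every NVMe/TCP data PDU carries a CRC32C data digest, and accel_error_inject_error arms the accel framework's crc32c corruption (arguments taken verbatim from the trace), which is what produces the digest mismatches logged below. Condensed into plain RPC calls (note that in the trace the accel injection goes through rpc_cmd, i.e. the harness's default RPC socket, while the bdev_nvme calls explicitly target bdevperf's socket):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF=/var/tmp/bperf.sock   # bdevperf's RPC socket from the launch above

    "$SPDK/scripts/rpc.py" -s $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable        # start from a clean injector
    "$SPDK/scripts/rpc.py" -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0      # data digest enabled
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32  # arm crc32c corruption
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s $BPERF perform_tests          # start the timed run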
00:25:47.726 [2024-07-16 00:27:06.319740] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90
00:25:47.726 [2024-07-16 00:27:06.320208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.726 [2024-07-16 00:27:06.320243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... roughly seventy further record triples (00:27:06.330103 through 00:27:07.022705) omitted: each 128 KiB WRITE (len:32) on qid:1 cid:15 hits a data digest error in tcp.c:2081:data_crc32_calc_done on tqpair=(0x10b5810) and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:25:48.249 [2024-07-16 00:27:07.028919] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90
00:25:48.249 [2024-07-16 00:27:07.029213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.249 [2024-07-16 00:27:07.029237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.249 [2024-07-16 00:27:07.035436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.249 [2024-07-16 00:27:07.035862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.249 [2024-07-16 00:27:07.035880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.249 [2024-07-16 00:27:07.042290] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.249 [2024-07-16 00:27:07.042712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.249 [2024-07-16 00:27:07.042732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.249 [2024-07-16 00:27:07.050419] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.249 [2024-07-16 00:27:07.050703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.249 [2024-07-16 00:27:07.050721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.249 [2024-07-16 00:27:07.056889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.249 [2024-07-16 00:27:07.057236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.249 [2024-07-16 00:27:07.057255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.249 [2024-07-16 00:27:07.062305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.249 [2024-07-16 00:27:07.062606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.249 [2024-07-16 00:27:07.062624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.249 [2024-07-16 00:27:07.067031] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.249 [2024-07-16 00:27:07.067339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.249 [2024-07-16 00:27:07.067359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.249 [2024-07-16 00:27:07.071798] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.249 [2024-07-16 00:27:07.072092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.249 [2024-07-16 00:27:07.072111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.249 [2024-07-16 00:27:07.076307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.249 [2024-07-16 00:27:07.076594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.249 [2024-07-16 00:27:07.076613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.249 [2024-07-16 00:27:07.080645] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.249 [2024-07-16 00:27:07.080926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.249 [2024-07-16 00:27:07.080946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.249 [2024-07-16 00:27:07.084747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.249 [2024-07-16 00:27:07.084989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.249 [2024-07-16 00:27:07.085010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.249 [2024-07-16 00:27:07.088643] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.249 [2024-07-16 00:27:07.088870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.249 [2024-07-16 00:27:07.088890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.249 [2024-07-16 00:27:07.092627] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.249 [2024-07-16 00:27:07.092858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.249 [2024-07-16 00:27:07.092878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.249 [2024-07-16 00:27:07.096592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.249 [2024-07-16 00:27:07.096833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.249 [2024-07-16 00:27:07.096853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.509 [2024-07-16 00:27:07.100907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.509 
[2024-07-16 00:27:07.101139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.509 [2024-07-16 00:27:07.101162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.509 [2024-07-16 00:27:07.105024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.509 [2024-07-16 00:27:07.105258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.509 [2024-07-16 00:27:07.105278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.509 [2024-07-16 00:27:07.109656] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.509 [2024-07-16 00:27:07.109901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.509 [2024-07-16 00:27:07.109919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.509 [2024-07-16 00:27:07.113630] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.509 [2024-07-16 00:27:07.113867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.509 [2024-07-16 00:27:07.113886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.509 [2024-07-16 00:27:07.117581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.509 [2024-07-16 00:27:07.117807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.509 [2024-07-16 00:27:07.117827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.509 [2024-07-16 00:27:07.121722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.509 [2024-07-16 00:27:07.121955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.509 [2024-07-16 00:27:07.121974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.509 [2024-07-16 00:27:07.127276] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.509 [2024-07-16 00:27:07.127515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.509 [2024-07-16 00:27:07.127534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.509 [2024-07-16 00:27:07.131428] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.509 [2024-07-16 00:27:07.131645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.509 [2024-07-16 00:27:07.131665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.509 [2024-07-16 00:27:07.135834] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.136103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.136122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.140410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.140640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.140659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.145911] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.146154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.146173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.151131] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.151392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.151411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.156539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.156786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.156806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.162634] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.162867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.162896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.167905] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.168152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.168171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.173817] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.174059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.174079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.179303] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.179540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.179559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.187396] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.187695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.187718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.194330] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.194570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.194589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.200411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.200652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.200671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.206306] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.206545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.206565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:48.510 [2024-07-16 00:27:07.211703] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.211939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.211959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.217862] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.218125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.218145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.222926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.223180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.223200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.228019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.228253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.228272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.233164] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.233390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.233408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.239781] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.240023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.240043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.244212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.244493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.244512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.248346] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.248570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.248589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.252542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.252772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.510 [2024-07-16 00:27:07.252792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.510 [2024-07-16 00:27:07.256550] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.510 [2024-07-16 00:27:07.256782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.256803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.260508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.260736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.260755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.264782] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.265008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.265028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.268635] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.268854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.268873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.273830] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.274208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.274232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.284412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.284768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.284788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.291786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.292128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.292147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.296442] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.296690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.296709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.300550] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.300775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.300794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.304523] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.304754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.304773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.308438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.308697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.308717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.312376] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.312593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.312613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.316325] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.316562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.316581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.320199] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.320440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.320461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.324192] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.324436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.324455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.328279] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.328504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.328525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.332126] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.332371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.332392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.335984] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.336212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.336238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.339859] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.340089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 
[2024-07-16 00:27:07.340109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.344519] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.344786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.344805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.349128] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.349395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.349414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.352990] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.353220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.353244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.511 [2024-07-16 00:27:07.356886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.511 [2024-07-16 00:27:07.357118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.511 [2024-07-16 00:27:07.357139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.771 [2024-07-16 00:27:07.360879] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.771 [2024-07-16 00:27:07.361115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.771 [2024-07-16 00:27:07.361135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.771 [2024-07-16 00:27:07.364743] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.771 [2024-07-16 00:27:07.364973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.771 [2024-07-16 00:27:07.364993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.771 [2024-07-16 00:27:07.369258] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.771 [2024-07-16 00:27:07.369490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.771 [2024-07-16 00:27:07.369510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.771 [2024-07-16 00:27:07.373481] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.771 [2024-07-16 00:27:07.373711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.771 [2024-07-16 00:27:07.373731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.771 [2024-07-16 00:27:07.378668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.771 [2024-07-16 00:27:07.378913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.771 [2024-07-16 00:27:07.378933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.771 [2024-07-16 00:27:07.384035] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.384260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.384280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.388405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.388643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.388663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.393629] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.393856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.393875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.398026] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.398269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.398288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.402820] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.403057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.403076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.407144] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.407377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.407396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.411740] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.412008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.412028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.416223] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.416468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.416486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.420943] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.421165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.421184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.430819] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.431394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.431414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.439551] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.439891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.439910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.446360] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.446604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.446626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.451528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.451756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.451776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.459536] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.459966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.459985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.468276] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.468617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.468637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.475557] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.475840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.475860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.480091] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.480361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.480381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.485159] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.485399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.485419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.490517] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 
[2024-07-16 00:27:07.490744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.490765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.496409] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.496641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.496660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.501629] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.501859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.501879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.506966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.507317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.507336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.513590] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.513885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.513904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.519955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.520284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.520305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.527384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90 00:25:48.772 [2024-07-16 00:27:07.527724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.772 [2024-07-16 00:27:07.527743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.772 [2024-07-16 00:27:07.534548] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
[... repeated injected-error entries omitted: the same three-line pattern (tcp.c:2081 data digest error, nvme_qpair.c WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR completion) recurs with varying LBAs and sqhd values from [2024-07-16 00:27:07.534864] through [2024-07-16 00:27:08.290778] ...]
WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.556 [2024-07-16 00:27:08.295522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:49.556 [2024-07-16 00:27:08.299821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90
00:25:49.556 [2024-07-16 00:27:08.300036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.556 [2024-07-16 00:27:08.300054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:49.556 [2024-07-16 00:27:08.304612] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b5810) with pdu=0x2000190fef90
00:25:49.556 [2024-07-16 00:27:08.304843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.556 [2024-07-16 00:27:08.304862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:49.556
00:25:49.556 Latency(us)
00:25:49.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:49.556 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:25:49.556 nvme0n1 : 2.00 5403.35 675.42 0.00 0.00 2956.63 1837.86 13164.19
00:25:49.556 ===================================================================================================================
00:25:49.556 Total : 5403.35 675.42 0.00 0.00 2956.63 1837.86 13164.19
00:25:49.556 0
00:25:49.556 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:49.556 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:49.556 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:49.556 | .driver_specific
00:25:49.556 | .nvme_error
00:25:49.556 | .status_code
00:25:49.556 | .command_transient_transport_error'
00:25:49.556 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:49.815 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 348 > 0 ))
00:25:49.815 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1653753
00:25:49.815 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 1653753 ']'
00:25:49.815 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 1653753
00:25:49.815 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname
00:25:49.815 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:25:49.815 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1653753
00:25:49.815 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # process_name=reactor_1
00:25:49.815 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']'
00:25:49.815 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1653753'
killing process with pid 1653753
00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill 1653753
Received shutdown signal, test time was about 2.000000 seconds
00:25:49.815
00:25:49.815 Latency(us)
00:25:49.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:49.815 ===================================================================================================================
00:25:49.815 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:49.815 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # wait 1653753
00:25:50.074 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1651737
00:25:50.074 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 1651737 ']'
00:25:50.074 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 1651737
00:25:50.074 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname
00:25:50.074 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:25:50.074 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1651737
00:25:50.074 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # process_name=reactor_0
00:25:50.074 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']'
00:25:50.074 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1651737'
00:25:50.074 killing process with pid 1651737
00:25:50.074 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill 1651737
00:25:50.074 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # wait 1651737
00:25:50.333
00:25:50.333 real 0m15.869s
00:25:50.333 user 0m30.157s
00:25:50.333 sys 0m4.251s
00:25:50.333 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1118 -- # xtrace_disable
00:25:50.333 00:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:50.333 ************************************
00:25:50.333 END TEST nvmf_digest_error
00:25:50.333 ************************************
00:25:50.333 00:27:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1136 -- # return 0
00:25:50.333 00:27:09 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:25:50.333 00:27:09 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:25:50.333 00:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:50.333 00:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:25:50.333 00:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:50.333 00:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:25:50.333 00:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:50.333 00:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:50.333 rmmod nvme_tcp
00:25:50.333 rmmod nvme_fabrics
00:25:50.333 rmmod nvme_keyring
00:25:50.333 00:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
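The get_transient_errcount step traced above is the heart of the pass/fail check: bperf's bdev layer counts each NVMe completion status it sees, so after two seconds of digest-corrupted writes the test asks the still-running bperf app for its iostat and pulls out the COMMAND TRANSIENT TRANSPORT ERROR tally. A minimal standalone sketch of that query, using the socket path and bdev name from this run:

    #!/usr/bin/env bash
    # Ask the bperf app (via its RPC socket) for per-bdev iostat and extract the
    # count of completions that failed with COMMAND TRANSIENT TRANSPORT ERROR.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))   # this run counted 348 such completions

The killprocess helper traced right after it is the autotest framework's guarded kill; a condensed sketch of just the steps visible in the trace (the real helper in autotest_common.sh carries additional retries and sudo handling):

    killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                  # refuse an empty pid
      kill -0 "$pid" || return 1                 # bail out if the process is already gone
      if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1   # don't signal a sudo wrapper directly
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                # reap it so the test sees the exit status
    }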
00:25:50.333 00:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:25:50.333 00:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:25:50.333 00:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1651737 ']' 00:25:50.333 00:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1651737 00:25:50.333 00:27:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@942 -- # '[' -z 1651737 ']' 00:25:50.333 00:27:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # kill -0 1651737 00:25:50.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (1651737) - No such process 00:25:50.334 00:27:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@969 -- # echo 'Process with pid 1651737 is not found' 00:25:50.334 Process with pid 1651737 is not found 00:25:50.334 00:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:50.334 00:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:50.334 00:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:50.334 00:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:50.334 00:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:50.334 00:27:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.334 00:27:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.334 00:27:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.870 00:27:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:52.870 00:25:52.870 real 0m39.032s 00:25:52.870 user 1m1.683s 00:25:52.870 sys 0m12.295s 00:25:52.870 00:27:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1118 -- # xtrace_disable 00:25:52.870 00:27:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:52.870 ************************************ 00:25:52.870 END TEST nvmf_digest 00:25:52.870 ************************************ 00:25:52.870 00:27:11 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:25:52.870 00:27:11 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:25:52.870 00:27:11 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:25:52.870 00:27:11 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:25:52.870 00:27:11 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:52.870 00:27:11 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:25:52.870 00:27:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:25:52.870 00:27:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:52.870 ************************************ 00:25:52.870 START TEST nvmf_bdevperf 00:25:52.870 ************************************ 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:52.870 * Looking for test storage... 
00:25:52.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:52.870 00:27:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:57.087 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:57.087 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:57.087 Found net devices under 0000:86:00.0: cvl_0_0 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:57.087 Found net devices under 0000:86:00.1: cvl_0_1 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:57.087 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:57.346 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:57.346 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:57.346 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:57.346 00:27:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:57.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:57.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:25:57.346 00:25:57.346 --- 10.0.0.2 ping statistics --- 00:25:57.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.346 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:57.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:57.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:25:57.346 00:25:57.346 --- 10.0.0.1 ping statistics --- 00:25:57.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.346 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1658142 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1658142 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:57.346 00:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@823 -- # '[' -z 1658142 ']' 00:25:57.347 00:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.347 00:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@828 -- # local max_retries=100 00:25:57.347 00:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.347 00:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # xtrace_disable 00:25:57.347 00:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.347 [2024-07-16 00:27:16.137771] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:25:57.347 [2024-07-16 00:27:16.137814] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.347 [2024-07-16 00:27:16.194318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:57.605 [2024-07-16 00:27:16.273745] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.605 [2024-07-16 00:27:16.273785] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
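The target bring-up that produced the DPDK and app_setup_trace notices here is the stock nvmfappstart pattern: launch nvmf_tgt inside the server-side network namespace, record its pid, and block until the app's RPC socket answers. A sketch using the exact values from this run (waitforlisten is the autotest helper that polls /var/tmp/spdk.sock until the app is ready):

    # Launch the NVMe-oF target in the cvl_0_0_ns_spdk namespace on cores 1-3 (-m 0xE),
    # with all tracepoint groups enabled (-e 0xFFFF), then wait for RPC readiness.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # helper from test/common/autotest_common.sh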
00:25:57.605 [2024-07-16 00:27:16.273792] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:57.605 [2024-07-16 00:27:16.273799] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:57.606 [2024-07-16 00:27:16.273804] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:57.606 [2024-07-16 00:27:16.273900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.606 [2024-07-16 00:27:16.273980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:57.606 [2024-07-16 00:27:16.273981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.173 00:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:25:58.173 00:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # return 0 00:25:58.173 00:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:58.173 00:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:58.173 00:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:58.173 00:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.173 00:27:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:58.173 00:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:58.173 00:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:58.173 [2024-07-16 00:27:16.992958] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.173 00:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:58.173 00:27:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:58.173 00:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:58.173 00:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:58.466 Malloc0 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:58.466 [2024-07-16 00:27:17.065859] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:58.466 { 00:25:58.466 "params": { 00:25:58.466 "name": "Nvme$subsystem", 00:25:58.466 "trtype": "$TEST_TRANSPORT", 00:25:58.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.466 "adrfam": "ipv4", 00:25:58.466 "trsvcid": "$NVMF_PORT", 00:25:58.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.466 "hdgst": ${hdgst:-false}, 00:25:58.466 "ddgst": ${ddgst:-false} 00:25:58.466 }, 00:25:58.466 "method": "bdev_nvme_attach_controller" 00:25:58.466 } 00:25:58.466 EOF 00:25:58.466 )") 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:58.466 00:27:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:58.466 "params": { 00:25:58.466 "name": "Nvme1", 00:25:58.466 "trtype": "tcp", 00:25:58.466 "traddr": "10.0.0.2", 00:25:58.466 "adrfam": "ipv4", 00:25:58.466 "trsvcid": "4420", 00:25:58.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:58.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:58.466 "hdgst": false, 00:25:58.466 "ddgst": false 00:25:58.466 }, 00:25:58.466 "method": "bdev_nvme_attach_controller" 00:25:58.466 }' 00:25:58.466 [2024-07-16 00:27:17.113731] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:25:58.466 [2024-07-16 00:27:17.113775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1658388 ] 00:25:58.466 [2024-07-16 00:27:17.167931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.466 [2024-07-16 00:27:17.241788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.757 Running I/O for 1 seconds... 
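With the listener up, the entire target-side configuration for this test collapses to a handful of RPCs; condensing the rpc_cmd calls traced above into one sketch (rpc_cmd is the autotest wrapper around scripts/rpc.py, and the comments gloss the options as used by this run):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192       # TCP transport, options as traced above
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0          # 64 MB RAM-backed bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                          # -a: allow any host, -s: serial number
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                        # the listener announced just above

On the host side, bdevperf then attaches to that listener using the bdev_nvme_attach_controller parameters printed in the generated JSON above (Nvme1 -> nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, host and data digests off).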
00:25:59.694
00:25:59.694 Latency(us)
00:25:59.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:59.694 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:59.694 Verification LBA range: start 0x0 length 0x4000
00:25:59.694 Nvme1n1 : 1.01 10817.32 42.26 0.00 0.00 11787.52 1239.49 13278.16
00:25:59.694 ===================================================================================================================
00:25:59.694 Total : 10817.32 42.26 0.00 0.00 11787.52 1239.49 13278.16
00:25:59.954 00:27:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1658628
00:25:59.954 00:27:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:25:59.954 00:27:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:25:59.954 00:27:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:25:59.954 00:27:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:25:59.954 00:27:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:25:59.954 00:27:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:25:59.954 00:27:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:25:59.954 {
00:25:59.954 "params": {
00:25:59.954 "name": "Nvme$subsystem",
00:25:59.954 "trtype": "$TEST_TRANSPORT",
00:25:59.954 "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:59.954 "adrfam": "ipv4",
00:25:59.954 "trsvcid": "$NVMF_PORT",
00:25:59.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:59.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:59.954 "hdgst": ${hdgst:-false},
00:25:59.954 "ddgst": ${ddgst:-false}
00:25:59.954 },
00:25:59.954 "method": "bdev_nvme_attach_controller"
00:25:59.954 }
00:25:59.954 EOF
00:25:59.954 )")
00:25:59.954 00:27:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:25:59.954 00:27:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:25:59.954 00:27:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:25:59.954 00:27:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:25:59.954 "params": {
00:25:59.954 "name": "Nvme1",
00:25:59.954 "trtype": "tcp",
00:25:59.954 "traddr": "10.0.0.2",
00:25:59.954 "adrfam": "ipv4",
00:25:59.954 "trsvcid": "4420",
00:25:59.954 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:25:59.954 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:25:59.954 "hdgst": false,
00:25:59.954 "ddgst": false
00:25:59.954 },
00:25:59.954 "method": "bdev_nvme_attach_controller"
00:25:59.954 }'
00:25:59.954 [2024-07-16 00:27:18.671842] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization...
00:25:59.954 [2024-07-16 00:27:18.671891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1658628 ]
00:25:59.954 [2024-07-16 00:27:18.725916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:59.954 [2024-07-16 00:27:18.795321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:00.213 Running I/O for 15 seconds...
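This second run is the failure-injection pass: bdevperf is sent into a 15-second verify loop, and three seconds in, the script yanks the target away with SIGKILL, which is why every in-flight READ below completes with ABORTED - SQ DELETION. The -f flag on this invocation is evidently what lets bdevperf ride through the outage instead of aborting, since the whole test depends on it surviving. A sketch of the sequence with this run's pids:

    # Start a long verify run in the background, then kill the target mid-run.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!               # 1658628 in this run
    sleep 3                      # let the verify workload reach steady state
    kill -9 "$nvmfpid"           # SIGKILL nvmf_tgt (1658142) out from under the host
    sleep 3                      # outstanding I/O drains as ABORTED - SQ DELETION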
00:26:03.501 00:27:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1658142 00:26:03.501 00:27:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:03.501 [2024-07-16 00:27:21.642355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.501 [2024-07-16 00:27:21.642395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.501 [2024-07-16 00:27:21.642413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.501 [2024-07-16 00:27:21.642421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.501 [2024-07-16 00:27:21.642431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.501 [2024-07-16 00:27:21.642439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.501 [2024-07-16 00:27:21.642448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.501 [2024-07-16 00:27:21.642454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.501 [2024-07-16 00:27:21.642468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.501 [2024-07-16 00:27:21.642475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.501 [2024-07-16 00:27:21.642484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.501 [2024-07-16 00:27:21.642490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.501 [2024-07-16 00:27:21.642499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.501 [2024-07-16 00:27:21.642506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.501 [2024-07-16 00:27:21.642515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.501 [2024-07-16 00:27:21.642522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.501 [2024-07-16 00:27:21.642531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.501 [2024-07-16 00:27:21.642537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.501 [2024-07-16 00:27:21.642545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.501 [2024-07-16 00:27:21.642551] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.501 [2024-07-16 00:27:21.642561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.501 [2024-07-16 00:27:21.642569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.501 [2024-07-16 00:27:21.642579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.501 [2024-07-16 00:27:21.642586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.501 [2024-07-16 00:27:21.642595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.501 [2024-07-16 00:27:21.642603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.501 [2024-07-16 00:27:21.642615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.501 [2024-07-16 00:27:21.642624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.501 [2024-07-16 00:27:21.642636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.501 [2024-07-16 00:27:21.642644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.501 [2024-07-16 00:27:21.642654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.501 [2024-07-16 00:27:21.642661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.502 [2024-07-16 00:27:21.642670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.502 [2024-07-16 00:27:21.642679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.502 [2024-07-16 00:27:21.642692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.502 [2024-07-16 00:27:21.642703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.502 [2024-07-16 00:27:21.642712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.502 [2024-07-16 00:27:21.642719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.502 [2024-07-16 00:27:21.642728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.502 [2024-07-16 00:27:21.642734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:03.502 [2024-07-16 00:27:21.642742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.502 [2024-07-16 00:27:21.642750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 105 more identical READ / ABORTED - SQ DELETION (00/08) pairs for lba:97696 through lba:98528 (len:8 each, sqid:1), timestamps 00:27:21.642758 through 00:27:21.644450 ...]
00:26:03.504 [2024-07-16 00:27:21.644458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.504 [2024-07-16 00:27:21.644464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:03.504 [2024-07-16 00:27:21.644472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd35c70 is same with the state(5) to be set
00:26:03.504 [2024-07-16 00:27:21.644481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:03.504 [2024-07-16 00:27:21.644487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:03.504 [2024-07-16 00:27:21.644492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98544 len:8 PRP1 0x0 PRP2 0x0
00:26:03.504 [2024-07-16 00:27:21.644498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:03.504 [2024-07-16 00:27:21.644541] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd35c70 was disconnected and freed. reset controller.
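For readers decoding the run above: the "(00/08)" in every ABORTED - SQ DELETION completion is the NVMe status pair (status code type / status code); generic status type 0x0 with status code 0x08 means the command was aborted because its submission queue was deleted, which is exactly what happens to queued reads when a qpair is torn down mid-I/O. A minimal standalone C sketch of that mapping follows; the file name and struct layout are illustrative stand-ins, not SPDK's actual spdk_nvme_status definition.

/* decode_status.c - hypothetical sketch: maps the "(SCT/SC)" pair printed by
 * spdk_nvme_print_completion to text. The struct below is a simplified
 * stand-in for the packed status field of a real NVMe completion entry. */
#include <stdio.h>

struct nvme_status {
    unsigned char sct;   /* status code type: 0x0 = generic command status */
    unsigned char sc;    /* status code within that type */
};

static const char *decode(struct nvme_status s)
{
    if (s.sct == 0x0 && s.sc == 0x00)
        return "SUCCESS";
    if (s.sct == 0x0 && s.sc == 0x08)
        return "ABORTED - SQ DELETION";   /* generic SC 0x08: aborted, SQ deleted */
    return "OTHER";
}

int main(void)
{
    struct nvme_status s = { .sct = 0x0, .sc = 0x08 };    /* the "(00/08)" above */
    printf("%s (%02x/%02x)\n", decode(s), s.sct, s.sc);   /* ABORTED - SQ DELETION (00/08) */
    return 0;
}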
00:26:03.504 [2024-07-16 00:27:21.647412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.504 [2024-07-16 00:27:21.647469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.504 [2024-07-16 00:27:21.648138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.504 [2024-07-16 00:27:21.648180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.504 [2024-07-16 00:27:21.648202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.504 [2024-07-16 00:27:21.648794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.504 [2024-07-16 00:27:21.649273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.504 [2024-07-16 00:27:21.649282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.504 [2024-07-16 00:27:21.649289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.504 [2024-07-16 00:27:21.652119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.504 [2024-07-16 00:27:21.660761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.504 [2024-07-16 00:27:21.661266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.504 [2024-07-16 00:27:21.661312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.504 [2024-07-16 00:27:21.661333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.504 [2024-07-16 00:27:21.661913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.504 [2024-07-16 00:27:21.662371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.504 [2024-07-16 00:27:21.662385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.504 [2024-07-16 00:27:21.662395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.504 [2024-07-16 00:27:21.666452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.504 [2024-07-16 00:27:21.674208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.504 [2024-07-16 00:27:21.674686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.504 [2024-07-16 00:27:21.674730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.504 [2024-07-16 00:27:21.674751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.504 [2024-07-16 00:27:21.675154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.504 [2024-07-16 00:27:21.675347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.504 [2024-07-16 00:27:21.675358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.504 [2024-07-16 00:27:21.675364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.504 [2024-07-16 00:27:21.678111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.504 [2024-07-16 00:27:21.687016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.504 [2024-07-16 00:27:21.687476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.504 [2024-07-16 00:27:21.687519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.505 [2024-07-16 00:27:21.687549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.505 [2024-07-16 00:27:21.688081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.505 [2024-07-16 00:27:21.688267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.505 [2024-07-16 00:27:21.688277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.505 [2024-07-16 00:27:21.688284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.505 [2024-07-16 00:27:21.690950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.505 [2024-07-16 00:27:21.699919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.505 [2024-07-16 00:27:21.700332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.505 [2024-07-16 00:27:21.700374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.505 [2024-07-16 00:27:21.700397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.505 [2024-07-16 00:27:21.700892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.505 [2024-07-16 00:27:21.701057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.505 [2024-07-16 00:27:21.701067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.505 [2024-07-16 00:27:21.701073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.505 [2024-07-16 00:27:21.703855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.505 [2024-07-16 00:27:21.712825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.505 [2024-07-16 00:27:21.713303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.505 [2024-07-16 00:27:21.713320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.505 [2024-07-16 00:27:21.713327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.505 [2024-07-16 00:27:21.713491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.505 [2024-07-16 00:27:21.713654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.505 [2024-07-16 00:27:21.713663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.505 [2024-07-16 00:27:21.713669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.505 [2024-07-16 00:27:21.716355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.505 [2024-07-16 00:27:21.725689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.505 [2024-07-16 00:27:21.726160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.505 [2024-07-16 00:27:21.726201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.505 [2024-07-16 00:27:21.726222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.505 [2024-07-16 00:27:21.726816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.505 [2024-07-16 00:27:21.727302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.505 [2024-07-16 00:27:21.727315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.505 [2024-07-16 00:27:21.727322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.505 [2024-07-16 00:27:21.730045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.505 [2024-07-16 00:27:21.738583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.505 [2024-07-16 00:27:21.739066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.505 [2024-07-16 00:27:21.739108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.505 [2024-07-16 00:27:21.739131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.505 [2024-07-16 00:27:21.739480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.505 [2024-07-16 00:27:21.739654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.505 [2024-07-16 00:27:21.739663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.505 [2024-07-16 00:27:21.739670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.505 [2024-07-16 00:27:21.742317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.505 [2024-07-16 00:27:21.751437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.505 [2024-07-16 00:27:21.751915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.505 [2024-07-16 00:27:21.751931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.505 [2024-07-16 00:27:21.751938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.505 [2024-07-16 00:27:21.752100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.505 [2024-07-16 00:27:21.752286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.505 [2024-07-16 00:27:21.752295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.505 [2024-07-16 00:27:21.752302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.505 [2024-07-16 00:27:21.754968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.505 [2024-07-16 00:27:21.764283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.505 [2024-07-16 00:27:21.764741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.505 [2024-07-16 00:27:21.764757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.505 [2024-07-16 00:27:21.764764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.505 [2024-07-16 00:27:21.764927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.505 [2024-07-16 00:27:21.765090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.505 [2024-07-16 00:27:21.765099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.505 [2024-07-16 00:27:21.765105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.505 [2024-07-16 00:27:21.767743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.505 [2024-07-16 00:27:21.777180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.505 [2024-07-16 00:27:21.777676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.505 [2024-07-16 00:27:21.777718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.505 [2024-07-16 00:27:21.777740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.505 [2024-07-16 00:27:21.778221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.505 [2024-07-16 00:27:21.778399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.505 [2024-07-16 00:27:21.778409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.505 [2024-07-16 00:27:21.778415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.505 [2024-07-16 00:27:21.781112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.505 [2024-07-16 00:27:21.790136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.505 [2024-07-16 00:27:21.790546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.505 [2024-07-16 00:27:21.790563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.505 [2024-07-16 00:27:21.790569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.505 [2024-07-16 00:27:21.790731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.505 [2024-07-16 00:27:21.790894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.505 [2024-07-16 00:27:21.790903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.505 [2024-07-16 00:27:21.790909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.505 [2024-07-16 00:27:21.793600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.505 [2024-07-16 00:27:21.803062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.505 [2024-07-16 00:27:21.803551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.505 [2024-07-16 00:27:21.803568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.505 [2024-07-16 00:27:21.803575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.505 [2024-07-16 00:27:21.803747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.506 [2024-07-16 00:27:21.803929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.506 [2024-07-16 00:27:21.803939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.506 [2024-07-16 00:27:21.803945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.506 [2024-07-16 00:27:21.806637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.506 [2024-07-16 00:27:21.816015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.506 [2024-07-16 00:27:21.816475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.506 [2024-07-16 00:27:21.816517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.506 [2024-07-16 00:27:21.816539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.506 [2024-07-16 00:27:21.817124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.506 [2024-07-16 00:27:21.817340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.506 [2024-07-16 00:27:21.817349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.506 [2024-07-16 00:27:21.817355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.506 [2024-07-16 00:27:21.820016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.506 [2024-07-16 00:27:21.828831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.506 [2024-07-16 00:27:21.829288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.506 [2024-07-16 00:27:21.829304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.506 [2024-07-16 00:27:21.829311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.506 [2024-07-16 00:27:21.829474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.506 [2024-07-16 00:27:21.829636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.506 [2024-07-16 00:27:21.829646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.506 [2024-07-16 00:27:21.829652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.506 [2024-07-16 00:27:21.832368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.506 [2024-07-16 00:27:21.841725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.506 [2024-07-16 00:27:21.842145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.506 [2024-07-16 00:27:21.842161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.506 [2024-07-16 00:27:21.842169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.506 [2024-07-16 00:27:21.842358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.506 [2024-07-16 00:27:21.842532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.506 [2024-07-16 00:27:21.842542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.506 [2024-07-16 00:27:21.842548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.506 [2024-07-16 00:27:21.845202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.506 [2024-07-16 00:27:21.854632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.506 [2024-07-16 00:27:21.855046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.506 [2024-07-16 00:27:21.855088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.506 [2024-07-16 00:27:21.855109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.506 [2024-07-16 00:27:21.855583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.506 [2024-07-16 00:27:21.855758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.506 [2024-07-16 00:27:21.855767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.506 [2024-07-16 00:27:21.855776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.506 [2024-07-16 00:27:21.858479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.506 [2024-07-16 00:27:21.867598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.506 [2024-07-16 00:27:21.868080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.506 [2024-07-16 00:27:21.868097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.506 [2024-07-16 00:27:21.868104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.506 [2024-07-16 00:27:21.868288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.506 [2024-07-16 00:27:21.868476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.506 [2024-07-16 00:27:21.868485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.506 [2024-07-16 00:27:21.868492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.506 [2024-07-16 00:27:21.871234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... 48 further identical reset/reconnect cycles for nqn.2016-06.io.spdk:cnode1 (tqpair=0xb04980, addr=10.0.0.2, port=4420) elided, spanning 00:27:21.880495 through 00:27:22.494922; every attempt fails with connect() errno = 111 and ends in "Resetting controller failed." ...]
00:26:03.770 [2024-07-16 00:27:22.504512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.770 [2024-07-16 00:27:22.505002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.770 [2024-07-16 00:27:22.505019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.770 [2024-07-16 00:27:22.505026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.770 [2024-07-16 00:27:22.505197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.770 [2024-07-16 00:27:22.505377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.770 [2024-07-16 00:27:22.505387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.770 [2024-07-16 00:27:22.505394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.770 [2024-07-16 00:27:22.508137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.771 [2024-07-16 00:27:22.517611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.771 [2024-07-16 00:27:22.518122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.771 [2024-07-16 00:27:22.518139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.771 [2024-07-16 00:27:22.518147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.771 [2024-07-16 00:27:22.518329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.771 [2024-07-16 00:27:22.518502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.771 [2024-07-16 00:27:22.518512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.771 [2024-07-16 00:27:22.518518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.771 [2024-07-16 00:27:22.521266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.771 [2024-07-16 00:27:22.530675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.771 [2024-07-16 00:27:22.531155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.771 [2024-07-16 00:27:22.531171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.771 [2024-07-16 00:27:22.531179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.771 [2024-07-16 00:27:22.531374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.771 [2024-07-16 00:27:22.531554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.771 [2024-07-16 00:27:22.531564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.771 [2024-07-16 00:27:22.531571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.771 [2024-07-16 00:27:22.534361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.771 [2024-07-16 00:27:22.543770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.771 [2024-07-16 00:27:22.544267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.771 [2024-07-16 00:27:22.544285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.771 [2024-07-16 00:27:22.544292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.771 [2024-07-16 00:27:22.544465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.771 [2024-07-16 00:27:22.544639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.771 [2024-07-16 00:27:22.544648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.771 [2024-07-16 00:27:22.544655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.771 [2024-07-16 00:27:22.547404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.771 [2024-07-16 00:27:22.556807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.771 [2024-07-16 00:27:22.557289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.771 [2024-07-16 00:27:22.557307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.771 [2024-07-16 00:27:22.557314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.771 [2024-07-16 00:27:22.557486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.771 [2024-07-16 00:27:22.557660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.771 [2024-07-16 00:27:22.557669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.771 [2024-07-16 00:27:22.557679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.771 [2024-07-16 00:27:22.560461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.771 [2024-07-16 00:27:22.569904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.771 [2024-07-16 00:27:22.570326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.771 [2024-07-16 00:27:22.570344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.771 [2024-07-16 00:27:22.570352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.771 [2024-07-16 00:27:22.570536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.771 [2024-07-16 00:27:22.570701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.771 [2024-07-16 00:27:22.570711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.771 [2024-07-16 00:27:22.570717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.771 [2024-07-16 00:27:22.573452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.771 [2024-07-16 00:27:22.582958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.771 [2024-07-16 00:27:22.583421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.771 [2024-07-16 00:27:22.583439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.771 [2024-07-16 00:27:22.583446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.771 [2024-07-16 00:27:22.583623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.771 [2024-07-16 00:27:22.583788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.771 [2024-07-16 00:27:22.583797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.771 [2024-07-16 00:27:22.583803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.771 [2024-07-16 00:27:22.586543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.771 [2024-07-16 00:27:22.595943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.771 [2024-07-16 00:27:22.596377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.771 [2024-07-16 00:27:22.596394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.771 [2024-07-16 00:27:22.596401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.771 [2024-07-16 00:27:22.596573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.771 [2024-07-16 00:27:22.596747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.771 [2024-07-16 00:27:22.596757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.771 [2024-07-16 00:27:22.596763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.771 [2024-07-16 00:27:22.599509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.771 [2024-07-16 00:27:22.608948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.771 [2024-07-16 00:27:22.609399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.771 [2024-07-16 00:27:22.609419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:03.771 [2024-07-16 00:27:22.609426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:03.771 [2024-07-16 00:27:22.609598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:03.771 [2024-07-16 00:27:22.609772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.771 [2024-07-16 00:27:22.609781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.771 [2024-07-16 00:27:22.609788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.771 [2024-07-16 00:27:22.612533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.032 [2024-07-16 00:27:22.622062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.032 [2024-07-16 00:27:22.622442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.032 [2024-07-16 00:27:22.622459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.032 [2024-07-16 00:27:22.622467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.032 [2024-07-16 00:27:22.622639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.032 [2024-07-16 00:27:22.622812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.032 [2024-07-16 00:27:22.622822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.032 [2024-07-16 00:27:22.622828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.032 [2024-07-16 00:27:22.625612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.032 [2024-07-16 00:27:22.635193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.032 [2024-07-16 00:27:22.635658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.032 [2024-07-16 00:27:22.635675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.032 [2024-07-16 00:27:22.635682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.032 [2024-07-16 00:27:22.635854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.032 [2024-07-16 00:27:22.636026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.032 [2024-07-16 00:27:22.636035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.032 [2024-07-16 00:27:22.636041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.032 [2024-07-16 00:27:22.638784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.032 [2024-07-16 00:27:22.648138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.032 [2024-07-16 00:27:22.648623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.032 [2024-07-16 00:27:22.648639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.032 [2024-07-16 00:27:22.648646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.032 [2024-07-16 00:27:22.648818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.032 [2024-07-16 00:27:22.648995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.032 [2024-07-16 00:27:22.649005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.032 [2024-07-16 00:27:22.649011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.032 [2024-07-16 00:27:22.651763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.032 [2024-07-16 00:27:22.661267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.032 [2024-07-16 00:27:22.661747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.032 [2024-07-16 00:27:22.661764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.032 [2024-07-16 00:27:22.661771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.033 [2024-07-16 00:27:22.661955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.033 [2024-07-16 00:27:22.662129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.033 [2024-07-16 00:27:22.662140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.033 [2024-07-16 00:27:22.662146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.033 [2024-07-16 00:27:22.664899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.033 [2024-07-16 00:27:22.674340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.033 [2024-07-16 00:27:22.674744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.033 [2024-07-16 00:27:22.674761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.033 [2024-07-16 00:27:22.674769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.033 [2024-07-16 00:27:22.674942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.033 [2024-07-16 00:27:22.675115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.033 [2024-07-16 00:27:22.675125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.033 [2024-07-16 00:27:22.675131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.033 [2024-07-16 00:27:22.677883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.033 [2024-07-16 00:27:22.687404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.033 [2024-07-16 00:27:22.687883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.033 [2024-07-16 00:27:22.687900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.033 [2024-07-16 00:27:22.687907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.033 [2024-07-16 00:27:22.688079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.033 [2024-07-16 00:27:22.688258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.033 [2024-07-16 00:27:22.688268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.033 [2024-07-16 00:27:22.688274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.033 [2024-07-16 00:27:22.691022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.033 [2024-07-16 00:27:22.700428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.033 [2024-07-16 00:27:22.700910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.033 [2024-07-16 00:27:22.700927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.033 [2024-07-16 00:27:22.700933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.033 [2024-07-16 00:27:22.701105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.033 [2024-07-16 00:27:22.701285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.033 [2024-07-16 00:27:22.701294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.033 [2024-07-16 00:27:22.701301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.033 [2024-07-16 00:27:22.704047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.033 [2024-07-16 00:27:22.713454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.033 [2024-07-16 00:27:22.713848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.033 [2024-07-16 00:27:22.713864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.033 [2024-07-16 00:27:22.713871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.033 [2024-07-16 00:27:22.714044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.033 [2024-07-16 00:27:22.714216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.033 [2024-07-16 00:27:22.714231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.033 [2024-07-16 00:27:22.714238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.033 [2024-07-16 00:27:22.716984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.033 [2024-07-16 00:27:22.726433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.033 [2024-07-16 00:27:22.726894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.033 [2024-07-16 00:27:22.726911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.033 [2024-07-16 00:27:22.726918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.033 [2024-07-16 00:27:22.727089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.033 [2024-07-16 00:27:22.727271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.033 [2024-07-16 00:27:22.727282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.033 [2024-07-16 00:27:22.727288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.033 [2024-07-16 00:27:22.730091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.033 [2024-07-16 00:27:22.739407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.033 [2024-07-16 00:27:22.739891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.033 [2024-07-16 00:27:22.739907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.033 [2024-07-16 00:27:22.739917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.033 [2024-07-16 00:27:22.740089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.033 [2024-07-16 00:27:22.740268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.033 [2024-07-16 00:27:22.740277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.033 [2024-07-16 00:27:22.740284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.033 [2024-07-16 00:27:22.743032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.033 [2024-07-16 00:27:22.752453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.033 [2024-07-16 00:27:22.752957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.033 [2024-07-16 00:27:22.752974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.033 [2024-07-16 00:27:22.752981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.033 [2024-07-16 00:27:22.753153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.033 [2024-07-16 00:27:22.753333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.033 [2024-07-16 00:27:22.753343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.033 [2024-07-16 00:27:22.753350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.033 [2024-07-16 00:27:22.756096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.033 [2024-07-16 00:27:22.765507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.033 [2024-07-16 00:27:22.765960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.033 [2024-07-16 00:27:22.765977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.033 [2024-07-16 00:27:22.765984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.033 [2024-07-16 00:27:22.766155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.033 [2024-07-16 00:27:22.766335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.033 [2024-07-16 00:27:22.766345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.033 [2024-07-16 00:27:22.766352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.033 [2024-07-16 00:27:22.769096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.033 [2024-07-16 00:27:22.778492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.033 [2024-07-16 00:27:22.778975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.033 [2024-07-16 00:27:22.778992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.033 [2024-07-16 00:27:22.778999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.033 [2024-07-16 00:27:22.779172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.033 [2024-07-16 00:27:22.779377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.033 [2024-07-16 00:27:22.779392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.033 [2024-07-16 00:27:22.779398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.033 [2024-07-16 00:27:22.782146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.033 [2024-07-16 00:27:22.791580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.033 [2024-07-16 00:27:22.792063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.033 [2024-07-16 00:27:22.792081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.033 [2024-07-16 00:27:22.792088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.033 [2024-07-16 00:27:22.792266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.033 [2024-07-16 00:27:22.792460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.033 [2024-07-16 00:27:22.792470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.033 [2024-07-16 00:27:22.792477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.033 [2024-07-16 00:27:22.795260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.033 [2024-07-16 00:27:22.804655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.033 [2024-07-16 00:27:22.805136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.034 [2024-07-16 00:27:22.805153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.034 [2024-07-16 00:27:22.805159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.034 [2024-07-16 00:27:22.805364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.034 [2024-07-16 00:27:22.805539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.034 [2024-07-16 00:27:22.805548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.034 [2024-07-16 00:27:22.805554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.034 [2024-07-16 00:27:22.808304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.034 [2024-07-16 00:27:22.817693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.034 [2024-07-16 00:27:22.818172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.034 [2024-07-16 00:27:22.818190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.034 [2024-07-16 00:27:22.818197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.034 [2024-07-16 00:27:22.818375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.034 [2024-07-16 00:27:22.818548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.034 [2024-07-16 00:27:22.818558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.034 [2024-07-16 00:27:22.818564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.034 [2024-07-16 00:27:22.821307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.034 [2024-07-16 00:27:22.830705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.034 [2024-07-16 00:27:22.831126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.034 [2024-07-16 00:27:22.831142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.034 [2024-07-16 00:27:22.831149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.034 [2024-07-16 00:27:22.831329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.034 [2024-07-16 00:27:22.831502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.034 [2024-07-16 00:27:22.831511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.034 [2024-07-16 00:27:22.831518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.034 [2024-07-16 00:27:22.834306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.034 [2024-07-16 00:27:22.843698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.034 [2024-07-16 00:27:22.844117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.034 [2024-07-16 00:27:22.844135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.034 [2024-07-16 00:27:22.844142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.034 [2024-07-16 00:27:22.844340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.034 [2024-07-16 00:27:22.844519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.034 [2024-07-16 00:27:22.844529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.034 [2024-07-16 00:27:22.844538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.034 [2024-07-16 00:27:22.847322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.034 [2024-07-16 00:27:22.856720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.034 [2024-07-16 00:27:22.857099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.034 [2024-07-16 00:27:22.857115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.034 [2024-07-16 00:27:22.857122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.034 [2024-07-16 00:27:22.857300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.034 [2024-07-16 00:27:22.857473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.034 [2024-07-16 00:27:22.857483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.034 [2024-07-16 00:27:22.857489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.034 [2024-07-16 00:27:22.860236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.034 [2024-07-16 00:27:22.869788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.034 [2024-07-16 00:27:22.870266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.034 [2024-07-16 00:27:22.870283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.034 [2024-07-16 00:27:22.870290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.034 [2024-07-16 00:27:22.870473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.034 [2024-07-16 00:27:22.870636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.034 [2024-07-16 00:27:22.870645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.034 [2024-07-16 00:27:22.870651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.034 [2024-07-16 00:27:22.873413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.034 [2024-07-16 00:27:22.882901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.034 [2024-07-16 00:27:22.883320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.034 [2024-07-16 00:27:22.883338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.034 [2024-07-16 00:27:22.883345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.034 [2024-07-16 00:27:22.883518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.034 [2024-07-16 00:27:22.883691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.294 [2024-07-16 00:27:22.883701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.294 [2024-07-16 00:27:22.883709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.294 [2024-07-16 00:27:22.886519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.294 [2024-07-16 00:27:22.895972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.294 [2024-07-16 00:27:22.896451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.294 [2024-07-16 00:27:22.896467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.294 [2024-07-16 00:27:22.896475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.294 [2024-07-16 00:27:22.896638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.295 [2024-07-16 00:27:22.896801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.295 [2024-07-16 00:27:22.896811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.295 [2024-07-16 00:27:22.896817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.295 [2024-07-16 00:27:22.899557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.295 [2024-07-16 00:27:22.908979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.295 [2024-07-16 00:27:22.909390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.295 [2024-07-16 00:27:22.909408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.295 [2024-07-16 00:27:22.909414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.295 [2024-07-16 00:27:22.909586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.295 [2024-07-16 00:27:22.909759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.295 [2024-07-16 00:27:22.909768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.295 [2024-07-16 00:27:22.909778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.295 [2024-07-16 00:27:22.912633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.295 [2024-07-16 00:27:22.921971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.295 [2024-07-16 00:27:22.922458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.295 [2024-07-16 00:27:22.922487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.295 [2024-07-16 00:27:22.922495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.295 [2024-07-16 00:27:22.922658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.295 [2024-07-16 00:27:22.922822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.295 [2024-07-16 00:27:22.922831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.295 [2024-07-16 00:27:22.922837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.295 [2024-07-16 00:27:22.925575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.295 [2024-07-16 00:27:22.935020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.295 [2024-07-16 00:27:22.935501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.295 [2024-07-16 00:27:22.935519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.295 [2024-07-16 00:27:22.935526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.295 [2024-07-16 00:27:22.935698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.295 [2024-07-16 00:27:22.935870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.295 [2024-07-16 00:27:22.935879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.295 [2024-07-16 00:27:22.935885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.295 [2024-07-16 00:27:22.938634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.295 [2024-07-16 00:27:22.948027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.295 [2024-07-16 00:27:22.948510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.295 [2024-07-16 00:27:22.948526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.295 [2024-07-16 00:27:22.948533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.295 [2024-07-16 00:27:22.948705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.295 [2024-07-16 00:27:22.948877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.295 [2024-07-16 00:27:22.948885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.295 [2024-07-16 00:27:22.948892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.295 [2024-07-16 00:27:22.951637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.295 [2024-07-16 00:27:22.961026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.295 [2024-07-16 00:27:22.961513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.295 [2024-07-16 00:27:22.961533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.295 [2024-07-16 00:27:22.961540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.295 [2024-07-16 00:27:22.961712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.295 [2024-07-16 00:27:22.961885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.295 [2024-07-16 00:27:22.961894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.295 [2024-07-16 00:27:22.961901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.295 [2024-07-16 00:27:22.964650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.295 [2024-07-16 00:27:22.974036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.295 [2024-07-16 00:27:22.974512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.295 [2024-07-16 00:27:22.974528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.295 [2024-07-16 00:27:22.974535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.295 [2024-07-16 00:27:22.974706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.295 [2024-07-16 00:27:22.974878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.295 [2024-07-16 00:27:22.974886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.295 [2024-07-16 00:27:22.974893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.295 [2024-07-16 00:27:22.977695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.295 [2024-07-16 00:27:22.987135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.295 [2024-07-16 00:27:22.987597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.295 [2024-07-16 00:27:22.987614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.295 [2024-07-16 00:27:22.987621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.295 [2024-07-16 00:27:22.987793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.295 [2024-07-16 00:27:22.987966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.295 [2024-07-16 00:27:22.987976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.295 [2024-07-16 00:27:22.987982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.295 [2024-07-16 00:27:22.990729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.295 [2024-07-16 00:27:23.000118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.295 [2024-07-16 00:27:23.000575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.295 [2024-07-16 00:27:23.000592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.295 [2024-07-16 00:27:23.000599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.295 [2024-07-16 00:27:23.000771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.295 [2024-07-16 00:27:23.000948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.295 [2024-07-16 00:27:23.000957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.295 [2024-07-16 00:27:23.000964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.295 [2024-07-16 00:27:23.003709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.295 [2024-07-16 00:27:23.013137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.295 [2024-07-16 00:27:23.013600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.295 [2024-07-16 00:27:23.013617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.296 [2024-07-16 00:27:23.013625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.296 [2024-07-16 00:27:23.013796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.296 [2024-07-16 00:27:23.013968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.296 [2024-07-16 00:27:23.013978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.296 [2024-07-16 00:27:23.013984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.296 [2024-07-16 00:27:23.016729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.296 [2024-07-16 00:27:23.026112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.296 [2024-07-16 00:27:23.026588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.296 [2024-07-16 00:27:23.026605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.296 [2024-07-16 00:27:23.026611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.296 [2024-07-16 00:27:23.026783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.296 [2024-07-16 00:27:23.026954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.296 [2024-07-16 00:27:23.026963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.296 [2024-07-16 00:27:23.026970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.296 [2024-07-16 00:27:23.029767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.296 [2024-07-16 00:27:23.039187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.296 [2024-07-16 00:27:23.039666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.296 [2024-07-16 00:27:23.039683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.296 [2024-07-16 00:27:23.039690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.296 [2024-07-16 00:27:23.039861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.296 [2024-07-16 00:27:23.040032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.296 [2024-07-16 00:27:23.040041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.296 [2024-07-16 00:27:23.040047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.296 [2024-07-16 00:27:23.042999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.296 [2024-07-16 00:27:23.052229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.296 [2024-07-16 00:27:23.052712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.296 [2024-07-16 00:27:23.052729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.296 [2024-07-16 00:27:23.052736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.296 [2024-07-16 00:27:23.052908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.296 [2024-07-16 00:27:23.053081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.296 [2024-07-16 00:27:23.053089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.296 [2024-07-16 00:27:23.053095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.296 [2024-07-16 00:27:23.055841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.296 [2024-07-16 00:27:23.065234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.296 [2024-07-16 00:27:23.065721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.296 [2024-07-16 00:27:23.065737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.296 [2024-07-16 00:27:23.065744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.296 [2024-07-16 00:27:23.065916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.296 [2024-07-16 00:27:23.066087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.296 [2024-07-16 00:27:23.066095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.296 [2024-07-16 00:27:23.066102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.296 [2024-07-16 00:27:23.068846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.296 [2024-07-16 00:27:23.078237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.296 [2024-07-16 00:27:23.078712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.296 [2024-07-16 00:27:23.078730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.296 [2024-07-16 00:27:23.078737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.296 [2024-07-16 00:27:23.078908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.296 [2024-07-16 00:27:23.079079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.296 [2024-07-16 00:27:23.079089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.296 [2024-07-16 00:27:23.079095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.296 [2024-07-16 00:27:23.081921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.296 [2024-07-16 00:27:23.091189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.296 [2024-07-16 00:27:23.091666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.296 [2024-07-16 00:27:23.091683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.296 [2024-07-16 00:27:23.091692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.296 [2024-07-16 00:27:23.091864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.296 [2024-07-16 00:27:23.092036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.296 [2024-07-16 00:27:23.092044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.296 [2024-07-16 00:27:23.092050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.296 [2024-07-16 00:27:23.094793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.296 [2024-07-16 00:27:23.104180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.296 [2024-07-16 00:27:23.104665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.296 [2024-07-16 00:27:23.104681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.296 [2024-07-16 00:27:23.104688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.296 [2024-07-16 00:27:23.104860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.296 [2024-07-16 00:27:23.105032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.296 [2024-07-16 00:27:23.105040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.296 [2024-07-16 00:27:23.105047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.296 [2024-07-16 00:27:23.107831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.296 [2024-07-16 00:27:23.117221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.296 [2024-07-16 00:27:23.117703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.296 [2024-07-16 00:27:23.117719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.296 [2024-07-16 00:27:23.117726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.296 [2024-07-16 00:27:23.117898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.296 [2024-07-16 00:27:23.118070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.296 [2024-07-16 00:27:23.118080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.296 [2024-07-16 00:27:23.118086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.297 [2024-07-16 00:27:23.120832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.297 [2024-07-16 00:27:23.130222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.297 [2024-07-16 00:27:23.130705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.297 [2024-07-16 00:27:23.130721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.297 [2024-07-16 00:27:23.130728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.297 [2024-07-16 00:27:23.130899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.297 [2024-07-16 00:27:23.131071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.297 [2024-07-16 00:27:23.131082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.297 [2024-07-16 00:27:23.131088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.297 [2024-07-16 00:27:23.133926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.297 [2024-07-16 00:27:23.143164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.297 [2024-07-16 00:27:23.143639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.297 [2024-07-16 00:27:23.143656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.297 [2024-07-16 00:27:23.143664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.297 [2024-07-16 00:27:23.143836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.297 [2024-07-16 00:27:23.144010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.297 [2024-07-16 00:27:23.144020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.297 [2024-07-16 00:27:23.144026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.557 [2024-07-16 00:27:23.146862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.557 [2024-07-16 00:27:23.156125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.557 [2024-07-16 00:27:23.156614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.557 [2024-07-16 00:27:23.156631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.557 [2024-07-16 00:27:23.156639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.557 [2024-07-16 00:27:23.156810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.557 [2024-07-16 00:27:23.156984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.557 [2024-07-16 00:27:23.156993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.557 [2024-07-16 00:27:23.157000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.557 [2024-07-16 00:27:23.159746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.557 [2024-07-16 00:27:23.169242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.557 [2024-07-16 00:27:23.169724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.557 [2024-07-16 00:27:23.169741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.557 [2024-07-16 00:27:23.169748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.557 [2024-07-16 00:27:23.169925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.557 [2024-07-16 00:27:23.170104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.557 [2024-07-16 00:27:23.170114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.557 [2024-07-16 00:27:23.170120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.557 [2024-07-16 00:27:23.172884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.557 [2024-07-16 00:27:23.182290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.557 [2024-07-16 00:27:23.182771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.557 [2024-07-16 00:27:23.182788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.557 [2024-07-16 00:27:23.182796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.557 [2024-07-16 00:27:23.182968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.557 [2024-07-16 00:27:23.183140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.557 [2024-07-16 00:27:23.183149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.557 [2024-07-16 00:27:23.183155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.557 [2024-07-16 00:27:23.185932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.557 [2024-07-16 00:27:23.195341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.557 [2024-07-16 00:27:23.195759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.557 [2024-07-16 00:27:23.195776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.557 [2024-07-16 00:27:23.195785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.557 [2024-07-16 00:27:23.195957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.557 [2024-07-16 00:27:23.196129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.558 [2024-07-16 00:27:23.196139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.558 [2024-07-16 00:27:23.196146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.558 [2024-07-16 00:27:23.198894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.558 [2024-07-16 00:27:23.208349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.558 [2024-07-16 00:27:23.208757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.558 [2024-07-16 00:27:23.208774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.558 [2024-07-16 00:27:23.208782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.558 [2024-07-16 00:27:23.208954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.558 [2024-07-16 00:27:23.209128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.558 [2024-07-16 00:27:23.209137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.558 [2024-07-16 00:27:23.209144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.558 [2024-07-16 00:27:23.211889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.558 [2024-07-16 00:27:23.221445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.558 [2024-07-16 00:27:23.221858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.558 [2024-07-16 00:27:23.221875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.558 [2024-07-16 00:27:23.221881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.558 [2024-07-16 00:27:23.222057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.558 [2024-07-16 00:27:23.222236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.558 [2024-07-16 00:27:23.222247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.558 [2024-07-16 00:27:23.222254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.558 [2024-07-16 00:27:23.224994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.558 [2024-07-16 00:27:23.234436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.558 [2024-07-16 00:27:23.234892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.558 [2024-07-16 00:27:23.234909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.558 [2024-07-16 00:27:23.234916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.558 [2024-07-16 00:27:23.235089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.558 [2024-07-16 00:27:23.235267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.558 [2024-07-16 00:27:23.235277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.558 [2024-07-16 00:27:23.235283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.558 [2024-07-16 00:27:23.238024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.558 [2024-07-16 00:27:23.247530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.558 [2024-07-16 00:27:23.248006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.558 [2024-07-16 00:27:23.248023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.558 [2024-07-16 00:27:23.248031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.558 [2024-07-16 00:27:23.248202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.558 [2024-07-16 00:27:23.248400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.558 [2024-07-16 00:27:23.248411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.558 [2024-07-16 00:27:23.248418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.558 [2024-07-16 00:27:23.251202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.558 [2024-07-16 00:27:23.260592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.558 [2024-07-16 00:27:23.261080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.558 [2024-07-16 00:27:23.261097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.558 [2024-07-16 00:27:23.261104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.558 [2024-07-16 00:27:23.261286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.558 [2024-07-16 00:27:23.261464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.558 [2024-07-16 00:27:23.261485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.558 [2024-07-16 00:27:23.261495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.558 [2024-07-16 00:27:23.264279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.558 [2024-07-16 00:27:23.273604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.558 [2024-07-16 00:27:23.274083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.558 [2024-07-16 00:27:23.274101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.558 [2024-07-16 00:27:23.274108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.558 [2024-07-16 00:27:23.274284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.558 [2024-07-16 00:27:23.274457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.558 [2024-07-16 00:27:23.274466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.558 [2024-07-16 00:27:23.274472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.558 [2024-07-16 00:27:23.277213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.558 [2024-07-16 00:27:23.286550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.558 [2024-07-16 00:27:23.287008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.558 [2024-07-16 00:27:23.287025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.558 [2024-07-16 00:27:23.287033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.558 [2024-07-16 00:27:23.287205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.558 [2024-07-16 00:27:23.287384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.558 [2024-07-16 00:27:23.287394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.558 [2024-07-16 00:27:23.287400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.558 [2024-07-16 00:27:23.290199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.558 [2024-07-16 00:27:23.299597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.558 [2024-07-16 00:27:23.300079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.558 [2024-07-16 00:27:23.300095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.558 [2024-07-16 00:27:23.300102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.558 [2024-07-16 00:27:23.300277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.558 [2024-07-16 00:27:23.300450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.558 [2024-07-16 00:27:23.300459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.558 [2024-07-16 00:27:23.300465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.558 [2024-07-16 00:27:23.303206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.558 [2024-07-16 00:27:23.312645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.558 [2024-07-16 00:27:23.312987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.558 [2024-07-16 00:27:23.313003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.558 [2024-07-16 00:27:23.313010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.558 [2024-07-16 00:27:23.313182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.558 [2024-07-16 00:27:23.313360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.558 [2024-07-16 00:27:23.313369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.558 [2024-07-16 00:27:23.313375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.558 [2024-07-16 00:27:23.316117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.558 [2024-07-16 00:27:23.325676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.558 [2024-07-16 00:27:23.326155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.558 [2024-07-16 00:27:23.326171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.558 [2024-07-16 00:27:23.326178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.558 [2024-07-16 00:27:23.326356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.558 [2024-07-16 00:27:23.326529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.558 [2024-07-16 00:27:23.326538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.558 [2024-07-16 00:27:23.326545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.558 [2024-07-16 00:27:23.329288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.558 [2024-07-16 00:27:23.338708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.558 [2024-07-16 00:27:23.339188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.558 [2024-07-16 00:27:23.339204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.558 [2024-07-16 00:27:23.339211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.558 [2024-07-16 00:27:23.339390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.558 [2024-07-16 00:27:23.339562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.559 [2024-07-16 00:27:23.339572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.559 [2024-07-16 00:27:23.339579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.559 [2024-07-16 00:27:23.342321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.559 [2024-07-16 00:27:23.351710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.559 [2024-07-16 00:27:23.352117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.559 [2024-07-16 00:27:23.352134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.559 [2024-07-16 00:27:23.352141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.559 [2024-07-16 00:27:23.352318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.559 [2024-07-16 00:27:23.352494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.559 [2024-07-16 00:27:23.352505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.559 [2024-07-16 00:27:23.352511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.559 [2024-07-16 00:27:23.355257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.559 [2024-07-16 00:27:23.364806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.559 [2024-07-16 00:27:23.365191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.559 [2024-07-16 00:27:23.365207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.559 [2024-07-16 00:27:23.365216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.559 [2024-07-16 00:27:23.365394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.559 [2024-07-16 00:27:23.365567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.559 [2024-07-16 00:27:23.365576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.559 [2024-07-16 00:27:23.365582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.559 [2024-07-16 00:27:23.368324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.559 [2024-07-16 00:27:23.377873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.559 [2024-07-16 00:27:23.378358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.559 [2024-07-16 00:27:23.378375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.559 [2024-07-16 00:27:23.378382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.559 [2024-07-16 00:27:23.378555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.559 [2024-07-16 00:27:23.378728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.559 [2024-07-16 00:27:23.378738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.559 [2024-07-16 00:27:23.378744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.559 [2024-07-16 00:27:23.381555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.559 [2024-07-16 00:27:23.390844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.559 [2024-07-16 00:27:23.391328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.559 [2024-07-16 00:27:23.391345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.559 [2024-07-16 00:27:23.391352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.559 [2024-07-16 00:27:23.391530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.559 [2024-07-16 00:27:23.391693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.559 [2024-07-16 00:27:23.391702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.559 [2024-07-16 00:27:23.391709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.559 [2024-07-16 00:27:23.394510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.559 [2024-07-16 00:27:23.403904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.559 [2024-07-16 00:27:23.404380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.559 [2024-07-16 00:27:23.404398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.559 [2024-07-16 00:27:23.404405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.559 [2024-07-16 00:27:23.404576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.559 [2024-07-16 00:27:23.404748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.559 [2024-07-16 00:27:23.404757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.559 [2024-07-16 00:27:23.404763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.559 [2024-07-16 00:27:23.407585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.819 [2024-07-16 00:27:23.416904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.819 [2024-07-16 00:27:23.417341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.819 [2024-07-16 00:27:23.417358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.819 [2024-07-16 00:27:23.417365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.819 [2024-07-16 00:27:23.417537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.819 [2024-07-16 00:27:23.417710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.819 [2024-07-16 00:27:23.417720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.819 [2024-07-16 00:27:23.417726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.819 [2024-07-16 00:27:23.420583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.819 [2024-07-16 00:27:23.430048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.819 [2024-07-16 00:27:23.430463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.819 [2024-07-16 00:27:23.430480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.819 [2024-07-16 00:27:23.430487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.819 [2024-07-16 00:27:23.430659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.819 [2024-07-16 00:27:23.430833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.819 [2024-07-16 00:27:23.430842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.819 [2024-07-16 00:27:23.430849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.819 [2024-07-16 00:27:23.433622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.819 [2024-07-16 00:27:23.443046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.819 [2024-07-16 00:27:23.443531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.819 [2024-07-16 00:27:23.443548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.819 [2024-07-16 00:27:23.443559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.819 [2024-07-16 00:27:23.443730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.819 [2024-07-16 00:27:23.443903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.819 [2024-07-16 00:27:23.443913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.819 [2024-07-16 00:27:23.443919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.819 [2024-07-16 00:27:23.446665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.819 [2024-07-16 00:27:23.456053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.819 [2024-07-16 00:27:23.456536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.819 [2024-07-16 00:27:23.456554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.819 [2024-07-16 00:27:23.456561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.819 [2024-07-16 00:27:23.456732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.819 [2024-07-16 00:27:23.456905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.819 [2024-07-16 00:27:23.456913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.819 [2024-07-16 00:27:23.456919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.819 [2024-07-16 00:27:23.459663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.819 [2024-07-16 00:27:23.469056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.819 [2024-07-16 00:27:23.469542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.819 [2024-07-16 00:27:23.469559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.819 [2024-07-16 00:27:23.469566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.819 [2024-07-16 00:27:23.469738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.820 [2024-07-16 00:27:23.469910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.820 [2024-07-16 00:27:23.469918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.820 [2024-07-16 00:27:23.469924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.820 [2024-07-16 00:27:23.472668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.820 [2024-07-16 00:27:23.482038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.820 [2024-07-16 00:27:23.482515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.820 [2024-07-16 00:27:23.482531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.820 [2024-07-16 00:27:23.482539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.820 [2024-07-16 00:27:23.482701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.820 [2024-07-16 00:27:23.482865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.820 [2024-07-16 00:27:23.482881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.820 [2024-07-16 00:27:23.482887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.820 [2024-07-16 00:27:23.485585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.820 [2024-07-16 00:27:23.494869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.820 [2024-07-16 00:27:23.495322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.820 [2024-07-16 00:27:23.495339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.820 [2024-07-16 00:27:23.495347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.820 [2024-07-16 00:27:23.495509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.820 [2024-07-16 00:27:23.495673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.820 [2024-07-16 00:27:23.495682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.820 [2024-07-16 00:27:23.495688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.820 [2024-07-16 00:27:23.498381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.820 [2024-07-16 00:27:23.507710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.820 [2024-07-16 00:27:23.508170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.820 [2024-07-16 00:27:23.508213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.820 [2024-07-16 00:27:23.508251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.820 [2024-07-16 00:27:23.508624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.820 [2024-07-16 00:27:23.508799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.820 [2024-07-16 00:27:23.508809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.820 [2024-07-16 00:27:23.508815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.820 [2024-07-16 00:27:23.511457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.820 [2024-07-16 00:27:23.520518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.820 [2024-07-16 00:27:23.520987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.820 [2024-07-16 00:27:23.521003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.820 [2024-07-16 00:27:23.521010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.820 [2024-07-16 00:27:23.521172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.820 [2024-07-16 00:27:23.521361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.820 [2024-07-16 00:27:23.521371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.820 [2024-07-16 00:27:23.521377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.820 [2024-07-16 00:27:23.524042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.820 [2024-07-16 00:27:23.533575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.820 [2024-07-16 00:27:23.533971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.820 [2024-07-16 00:27:23.533988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.820 [2024-07-16 00:27:23.533995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.820 [2024-07-16 00:27:23.534167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.820 [2024-07-16 00:27:23.534345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.820 [2024-07-16 00:27:23.534356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.820 [2024-07-16 00:27:23.534363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.820 [2024-07-16 00:27:23.537028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.820 [2024-07-16 00:27:23.546433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.820 [2024-07-16 00:27:23.546841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.820 [2024-07-16 00:27:23.546857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.820 [2024-07-16 00:27:23.546865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.820 [2024-07-16 00:27:23.547037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.820 [2024-07-16 00:27:23.547211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.820 [2024-07-16 00:27:23.547219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.820 [2024-07-16 00:27:23.547231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.820 [2024-07-16 00:27:23.549847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.820 [2024-07-16 00:27:23.559227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.820 [2024-07-16 00:27:23.559707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.820 [2024-07-16 00:27:23.559749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.820 [2024-07-16 00:27:23.559771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.820 [2024-07-16 00:27:23.560365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.820 [2024-07-16 00:27:23.560897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.820 [2024-07-16 00:27:23.560907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.820 [2024-07-16 00:27:23.560913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.820 [2024-07-16 00:27:23.563553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.820 [2024-07-16 00:27:23.572169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.820 [2024-07-16 00:27:23.572595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.820 [2024-07-16 00:27:23.572612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.820 [2024-07-16 00:27:23.572620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.820 [2024-07-16 00:27:23.572797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.820 [2024-07-16 00:27:23.572970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.820 [2024-07-16 00:27:23.572980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.820 [2024-07-16 00:27:23.572988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.820 [2024-07-16 00:27:23.575625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.820 [2024-07-16 00:27:23.585049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.820 [2024-07-16 00:27:23.585489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.820 [2024-07-16 00:27:23.585505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.820 [2024-07-16 00:27:23.585511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.820 [2024-07-16 00:27:23.585673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.820 [2024-07-16 00:27:23.585836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.820 [2024-07-16 00:27:23.585845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.820 [2024-07-16 00:27:23.585851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.820 [2024-07-16 00:27:23.588476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.820 [2024-07-16 00:27:23.598074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.820 [2024-07-16 00:27:23.598525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.820 [2024-07-16 00:27:23.598541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.820 [2024-07-16 00:27:23.598549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.820 [2024-07-16 00:27:23.598711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.821 [2024-07-16 00:27:23.598874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.821 [2024-07-16 00:27:23.598883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.821 [2024-07-16 00:27:23.598889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.821 [2024-07-16 00:27:23.601587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.821 [2024-07-16 00:27:23.611024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.821 [2024-07-16 00:27:23.611477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.821 [2024-07-16 00:27:23.611495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:04.821 [2024-07-16 00:27:23.611502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:04.821 [2024-07-16 00:27:23.611664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:04.821 [2024-07-16 00:27:23.611827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:04.821 [2024-07-16 00:27:23.611836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:04.821 [2024-07-16 00:27:23.611846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.821 [2024-07-16 00:27:23.614537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:04.821 [2024-07-16 00:27:23.623920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.821 [2024-07-16 00:27:23.624376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.821 [2024-07-16 00:27:23.624418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.821 [2024-07-16 00:27:23.624440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.821 [2024-07-16 00:27:23.624948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.821 [2024-07-16 00:27:23.625122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.821 [2024-07-16 00:27:23.625131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.821 [2024-07-16 00:27:23.625138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.821 [2024-07-16 00:27:23.627765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.821 [2024-07-16 00:27:23.636784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.821 [2024-07-16 00:27:23.637271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.821 [2024-07-16 00:27:23.637289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.821 [2024-07-16 00:27:23.637296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.821 [2024-07-16 00:27:23.637475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.821 [2024-07-16 00:27:23.637638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.821 [2024-07-16 00:27:23.637647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.821 [2024-07-16 00:27:23.637653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.821 [2024-07-16 00:27:23.640292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.821 [2024-07-16 00:27:23.649712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.821 [2024-07-16 00:27:23.650119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.821 [2024-07-16 00:27:23.650134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.821 [2024-07-16 00:27:23.650141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.821 [2024-07-16 00:27:23.650318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.821 [2024-07-16 00:27:23.650493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.821 [2024-07-16 00:27:23.650502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.821 [2024-07-16 00:27:23.650508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.821 [2024-07-16 00:27:23.653166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.821 [2024-07-16 00:27:23.662918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.821 [2024-07-16 00:27:23.663379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.821 [2024-07-16 00:27:23.663395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:04.821 [2024-07-16 00:27:23.663403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:04.821 [2024-07-16 00:27:23.663574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:04.821 [2024-07-16 00:27:23.663750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.821 [2024-07-16 00:27:23.663759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.821 [2024-07-16 00:27:23.663765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.821 [2024-07-16 00:27:23.666518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.080 [2024-07-16 00:27:23.675970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.080 [2024-07-16 00:27:23.676393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-07-16 00:27:23.676410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.080 [2024-07-16 00:27:23.676417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.080 [2024-07-16 00:27:23.676589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.080 [2024-07-16 00:27:23.676763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.080 [2024-07-16 00:27:23.676772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.080 [2024-07-16 00:27:23.676778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.081 [2024-07-16 00:27:23.679586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.081 [2024-07-16 00:27:23.689036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.081 [2024-07-16 00:27:23.689432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-07-16 00:27:23.689476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.081 [2024-07-16 00:27:23.689499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.081 [2024-07-16 00:27:23.689993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.081 [2024-07-16 00:27:23.690157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.081 [2024-07-16 00:27:23.690167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.081 [2024-07-16 00:27:23.690173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.081 [2024-07-16 00:27:23.692819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.081 [2024-07-16 00:27:23.701957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.081 [2024-07-16 00:27:23.702401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-07-16 00:27:23.702452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.081 [2024-07-16 00:27:23.702475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.081 [2024-07-16 00:27:23.703000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.081 [2024-07-16 00:27:23.703169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.081 [2024-07-16 00:27:23.703178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.081 [2024-07-16 00:27:23.703185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.081 [2024-07-16 00:27:23.705934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.081 [2024-07-16 00:27:23.714922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.081 [2024-07-16 00:27:23.715309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-07-16 00:27:23.715327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.081 [2024-07-16 00:27:23.715335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.081 [2024-07-16 00:27:23.715516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.081 [2024-07-16 00:27:23.715681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.081 [2024-07-16 00:27:23.715690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.081 [2024-07-16 00:27:23.715696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.081 [2024-07-16 00:27:23.718317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.081 [2024-07-16 00:27:23.727798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.081 [2024-07-16 00:27:23.728204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-07-16 00:27:23.728220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.081 [2024-07-16 00:27:23.728231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.081 [2024-07-16 00:27:23.728417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.081 [2024-07-16 00:27:23.728591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.081 [2024-07-16 00:27:23.728601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.081 [2024-07-16 00:27:23.728607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.081 [2024-07-16 00:27:23.731266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.081 [2024-07-16 00:27:23.740654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.081 [2024-07-16 00:27:23.741026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-07-16 00:27:23.741043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.081 [2024-07-16 00:27:23.741050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.081 [2024-07-16 00:27:23.741212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.081 [2024-07-16 00:27:23.741403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.081 [2024-07-16 00:27:23.741414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.081 [2024-07-16 00:27:23.741420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.081 [2024-07-16 00:27:23.744088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.081 [2024-07-16 00:27:23.753563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.081 [2024-07-16 00:27:23.753994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-07-16 00:27:23.754010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.081 [2024-07-16 00:27:23.754018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.081 [2024-07-16 00:27:23.754180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.081 [2024-07-16 00:27:23.754349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.081 [2024-07-16 00:27:23.754359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.081 [2024-07-16 00:27:23.754365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.081 [2024-07-16 00:27:23.757074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.081 [2024-07-16 00:27:23.766574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.081 [2024-07-16 00:27:23.767086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-07-16 00:27:23.767127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.081 [2024-07-16 00:27:23.767150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.081 [2024-07-16 00:27:23.767742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.081 [2024-07-16 00:27:23.768265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.081 [2024-07-16 00:27:23.768275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.081 [2024-07-16 00:27:23.768281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.081 [2024-07-16 00:27:23.770972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.081 [2024-07-16 00:27:23.779596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.081 [2024-07-16 00:27:23.780063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-07-16 00:27:23.780105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.081 [2024-07-16 00:27:23.780127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.081 [2024-07-16 00:27:23.780714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.081 [2024-07-16 00:27:23.781292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.081 [2024-07-16 00:27:23.781302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.081 [2024-07-16 00:27:23.781308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.081 [2024-07-16 00:27:23.783978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.081 [2024-07-16 00:27:23.792593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.081 [2024-07-16 00:27:23.793001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-07-16 00:27:23.793040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.081 [2024-07-16 00:27:23.793070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.081 [2024-07-16 00:27:23.793661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.081 [2024-07-16 00:27:23.794249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.081 [2024-07-16 00:27:23.794275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.081 [2024-07-16 00:27:23.794296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.081 [2024-07-16 00:27:23.796966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.081 [2024-07-16 00:27:23.805665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.081 [2024-07-16 00:27:23.806074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-07-16 00:27:23.806091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.081 [2024-07-16 00:27:23.806099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.081 [2024-07-16 00:27:23.806275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.081 [2024-07-16 00:27:23.806448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.081 [2024-07-16 00:27:23.806458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.081 [2024-07-16 00:27:23.806464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.081 [2024-07-16 00:27:23.809433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.081 [2024-07-16 00:27:23.818691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.081 [2024-07-16 00:27:23.819165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-07-16 00:27:23.819208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.081 [2024-07-16 00:27:23.819241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.081 [2024-07-16 00:27:23.819648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.081 [2024-07-16 00:27:23.819822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.081 [2024-07-16 00:27:23.819832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.081 [2024-07-16 00:27:23.819838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.081 [2024-07-16 00:27:23.822583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.082 [2024-07-16 00:27:23.831664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.082 [2024-07-16 00:27:23.832080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-07-16 00:27:23.832122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.082 [2024-07-16 00:27:23.832145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.082 [2024-07-16 00:27:23.832654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.082 [2024-07-16 00:27:23.832829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.082 [2024-07-16 00:27:23.832841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.082 [2024-07-16 00:27:23.832848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.082 [2024-07-16 00:27:23.835620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.082 [2024-07-16 00:27:23.844628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.082 [2024-07-16 00:27:23.845121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-07-16 00:27:23.845164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.082 [2024-07-16 00:27:23.845186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.082 [2024-07-16 00:27:23.845780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.082 [2024-07-16 00:27:23.846382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.082 [2024-07-16 00:27:23.846392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.082 [2024-07-16 00:27:23.846398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.082 [2024-07-16 00:27:23.849087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.082 [2024-07-16 00:27:23.857552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.082 [2024-07-16 00:27:23.858038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-07-16 00:27:23.858081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.082 [2024-07-16 00:27:23.858102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.082 [2024-07-16 00:27:23.858501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.082 [2024-07-16 00:27:23.858667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.082 [2024-07-16 00:27:23.858677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.082 [2024-07-16 00:27:23.858683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.082 [2024-07-16 00:27:23.861426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.082 [2024-07-16 00:27:23.870554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.082 [2024-07-16 00:27:23.871034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-07-16 00:27:23.871076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.082 [2024-07-16 00:27:23.871098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.082 [2024-07-16 00:27:23.871608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.082 [2024-07-16 00:27:23.871774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.082 [2024-07-16 00:27:23.871783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.082 [2024-07-16 00:27:23.871789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.082 [2024-07-16 00:27:23.874469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.082 [2024-07-16 00:27:23.883562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.082 [2024-07-16 00:27:23.883971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-07-16 00:27:23.883987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.082 [2024-07-16 00:27:23.883994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.082 [2024-07-16 00:27:23.884156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.082 [2024-07-16 00:27:23.884344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.082 [2024-07-16 00:27:23.884355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.082 [2024-07-16 00:27:23.884362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.082 [2024-07-16 00:27:23.887085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.082 [2024-07-16 00:27:23.896548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.082 [2024-07-16 00:27:23.897007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-07-16 00:27:23.897025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.082 [2024-07-16 00:27:23.897032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.082 [2024-07-16 00:27:23.897204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.082 [2024-07-16 00:27:23.897383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.082 [2024-07-16 00:27:23.897393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.082 [2024-07-16 00:27:23.897399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.082 [2024-07-16 00:27:23.900007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.082 [2024-07-16 00:27:23.909584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.082 [2024-07-16 00:27:23.909920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-07-16 00:27:23.909937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.082 [2024-07-16 00:27:23.909944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.082 [2024-07-16 00:27:23.910106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.082 [2024-07-16 00:27:23.910276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.082 [2024-07-16 00:27:23.910285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.082 [2024-07-16 00:27:23.910291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.082 [2024-07-16 00:27:23.912973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.082 [2024-07-16 00:27:23.922630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.082 [2024-07-16 00:27:23.923022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-07-16 00:27:23.923062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.082 [2024-07-16 00:27:23.923085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.082 [2024-07-16 00:27:23.923635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.082 [2024-07-16 00:27:23.923801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.082 [2024-07-16 00:27:23.923810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.082 [2024-07-16 00:27:23.923816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.082 [2024-07-16 00:27:23.926505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.343 [2024-07-16 00:27:23.935814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.343 [2024-07-16 00:27:23.936337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.343 [2024-07-16 00:27:23.936355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.343 [2024-07-16 00:27:23.936363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.343 [2024-07-16 00:27:23.936541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.343 [2024-07-16 00:27:23.936719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.343 [2024-07-16 00:27:23.936729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.343 [2024-07-16 00:27:23.936736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.343 [2024-07-16 00:27:23.939565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.343 [2024-07-16 00:27:23.948912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.343 [2024-07-16 00:27:23.949399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.343 [2024-07-16 00:27:23.949417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.343 [2024-07-16 00:27:23.949424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.343 [2024-07-16 00:27:23.949601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.343 [2024-07-16 00:27:23.949780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.343 [2024-07-16 00:27:23.949790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.343 [2024-07-16 00:27:23.949796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.343 [2024-07-16 00:27:23.952625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.343 [2024-07-16 00:27:23.961973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.343 [2024-07-16 00:27:23.962467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.343 [2024-07-16 00:27:23.962485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.343 [2024-07-16 00:27:23.962492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.343 [2024-07-16 00:27:23.962670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.343 [2024-07-16 00:27:23.962847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.343 [2024-07-16 00:27:23.962856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.343 [2024-07-16 00:27:23.962866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.343 [2024-07-16 00:27:23.965700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.343 [2024-07-16 00:27:23.975053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.343 [2024-07-16 00:27:23.975524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.343 [2024-07-16 00:27:23.975542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.343 [2024-07-16 00:27:23.975549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.343 [2024-07-16 00:27:23.975727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.344 [2024-07-16 00:27:23.975905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.344 [2024-07-16 00:27:23.975915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.344 [2024-07-16 00:27:23.975921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.344 [2024-07-16 00:27:23.978758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.344 [2024-07-16 00:27:23.988116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.344 [2024-07-16 00:27:23.988586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.344 [2024-07-16 00:27:23.988604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.344 [2024-07-16 00:27:23.988612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.344 [2024-07-16 00:27:23.988788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.344 [2024-07-16 00:27:23.988968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.344 [2024-07-16 00:27:23.988978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.344 [2024-07-16 00:27:23.988985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.344 [2024-07-16 00:27:23.991821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.344 [2024-07-16 00:27:24.001174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.344 [2024-07-16 00:27:24.001667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.344 [2024-07-16 00:27:24.001685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.344 [2024-07-16 00:27:24.001692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.344 [2024-07-16 00:27:24.001869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.344 [2024-07-16 00:27:24.002047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.344 [2024-07-16 00:27:24.002057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.344 [2024-07-16 00:27:24.002063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.344 [2024-07-16 00:27:24.004898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.344 [2024-07-16 00:27:24.014259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.344 [2024-07-16 00:27:24.014757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.344 [2024-07-16 00:27:24.014772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.344 [2024-07-16 00:27:24.014780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.344 [2024-07-16 00:27:24.014957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.344 [2024-07-16 00:27:24.015134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.344 [2024-07-16 00:27:24.015143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.344 [2024-07-16 00:27:24.015149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.344 [2024-07-16 00:27:24.017983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.344 [2024-07-16 00:27:24.027341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.344 [2024-07-16 00:27:24.027823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.344 [2024-07-16 00:27:24.027840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.344 [2024-07-16 00:27:24.027847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.344 [2024-07-16 00:27:24.028023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.344 [2024-07-16 00:27:24.028200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.344 [2024-07-16 00:27:24.028208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.344 [2024-07-16 00:27:24.028215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.344 [2024-07-16 00:27:24.031048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.344 [2024-07-16 00:27:24.040478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.344 [2024-07-16 00:27:24.041014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.344 [2024-07-16 00:27:24.041059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.344 [2024-07-16 00:27:24.041080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.344 [2024-07-16 00:27:24.041690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.344 [2024-07-16 00:27:24.041864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.344 [2024-07-16 00:27:24.041872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.344 [2024-07-16 00:27:24.041879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.344 [2024-07-16 00:27:24.044625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.344 [2024-07-16 00:27:24.053317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.344 [2024-07-16 00:27:24.053782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.344 [2024-07-16 00:27:24.053799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.344 [2024-07-16 00:27:24.053806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.344 [2024-07-16 00:27:24.053969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.344 [2024-07-16 00:27:24.054135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.344 [2024-07-16 00:27:24.054146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.344 [2024-07-16 00:27:24.054152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.344 [2024-07-16 00:27:24.056846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.344 [2024-07-16 00:27:24.066214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.344 [2024-07-16 00:27:24.066636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.344 [2024-07-16 00:27:24.066679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.344 [2024-07-16 00:27:24.066700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.344 [2024-07-16 00:27:24.067294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.344 [2024-07-16 00:27:24.067876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.344 [2024-07-16 00:27:24.067902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.344 [2024-07-16 00:27:24.067922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.344 [2024-07-16 00:27:24.070578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.344 [2024-07-16 00:27:24.079089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.344 [2024-07-16 00:27:24.079494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.344 [2024-07-16 00:27:24.079511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.344 [2024-07-16 00:27:24.079519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.344 [2024-07-16 00:27:24.079682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.344 [2024-07-16 00:27:24.079845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.344 [2024-07-16 00:27:24.079854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.344 [2024-07-16 00:27:24.079860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.344 [2024-07-16 00:27:24.082569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.344 [2024-07-16 00:27:24.091872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.344 [2024-07-16 00:27:24.092343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.344 [2024-07-16 00:27:24.092360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.344 [2024-07-16 00:27:24.092366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.344 [2024-07-16 00:27:24.092528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.344 [2024-07-16 00:27:24.092691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.344 [2024-07-16 00:27:24.092699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.344 [2024-07-16 00:27:24.092706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.344 [2024-07-16 00:27:24.095399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.344 [2024-07-16 00:27:24.104763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.344 [2024-07-16 00:27:24.105247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.344 [2024-07-16 00:27:24.105291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.344 [2024-07-16 00:27:24.105313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.344 [2024-07-16 00:27:24.105862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.344 [2024-07-16 00:27:24.106027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.344 [2024-07-16 00:27:24.106034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.344 [2024-07-16 00:27:24.106041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.344 [2024-07-16 00:27:24.108731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.344 [2024-07-16 00:27:24.117549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.344 [2024-07-16 00:27:24.118026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.344 [2024-07-16 00:27:24.118068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.344 [2024-07-16 00:27:24.118090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.345 [2024-07-16 00:27:24.118683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.345 [2024-07-16 00:27:24.119136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.345 [2024-07-16 00:27:24.119146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.345 [2024-07-16 00:27:24.119153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.345 [2024-07-16 00:27:24.121867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.345 [2024-07-16 00:27:24.130527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.345 [2024-07-16 00:27:24.131007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.345 [2024-07-16 00:27:24.131049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.345 [2024-07-16 00:27:24.131070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.345 [2024-07-16 00:27:24.131663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.345 [2024-07-16 00:27:24.132186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.345 [2024-07-16 00:27:24.132196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.345 [2024-07-16 00:27:24.132203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.345 [2024-07-16 00:27:24.134940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.345 [2024-07-16 00:27:24.143444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.345 [2024-07-16 00:27:24.143928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.345 [2024-07-16 00:27:24.143969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.345 [2024-07-16 00:27:24.144004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.345 [2024-07-16 00:27:24.144599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.345 [2024-07-16 00:27:24.145134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.345 [2024-07-16 00:27:24.145140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.345 [2024-07-16 00:27:24.145146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.345 [2024-07-16 00:27:24.147740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.345 [2024-07-16 00:27:24.156622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.345 [2024-07-16 00:27:24.157109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.345 [2024-07-16 00:27:24.157152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:05.345 [2024-07-16 00:27:24.157174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:05.345 [2024-07-16 00:27:24.157766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:05.345 [2024-07-16 00:27:24.158269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.345 [2024-07-16 00:27:24.158279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.345 [2024-07-16 00:27:24.158286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.345 [2024-07-16 00:27:24.161016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... 2024-07-16 00:27:24.169504 through 00:27:24.627022: the reconnect cycle above repeats 36 more times for tqpair=0xb04980 (addr=10.0.0.2, port=4420), each pass ending with "Resetting controller failed." ...]
00:26:05.867 [2024-07-16 00:27:24.636411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:05.867 [2024-07-16 00:27:24.636819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.867 [2024-07-16 00:27:24.636835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:05.867 [2024-07-16 00:27:24.636843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:05.867 [2024-07-16 00:27:24.637015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:05.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1658142 Killed "${NVMF_APP[@]}" "$@"
00:26:05.867 [2024-07-16 00:27:24.637187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:05.867 [2024-07-16 00:27:24.637196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:05.867 [2024-07-16 00:27:24.637202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:05.867 00:27:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:26:05.867 00:27:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:26:05.867 00:27:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:05.867 00:27:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@716 -- # xtrace_disable
00:26:05.867 00:27:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:05.867 [2024-07-16 00:27:24.639981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.867 00:27:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1659559
00:26:05.867 00:27:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1659559
00:26:05.867 00:27:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:26:05.867 00:27:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@823 -- # '[' -z 1659559 ']'
00:26:05.867 00:27:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:05.867 00:27:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@828 -- # local max_retries=100
00:26:05.867 00:27:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:05.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
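The waitforlisten step traced above polls until the restarted target (pid 1659559) accepts connections on /var/tmp/spdk.sock, giving up after max_retries=100. A rough standalone equivalent of that wait loop, assuming a one-second retry interval (the real helper is a bash function in autotest_common.sh):

/*
 * Rough standalone equivalent of the waitforlisten step above.
 * Assumptions: the actual helper is a bash function, and the
 * one-second retry interval here is illustrative. max_retries=100
 * and the socket path /var/tmp/spdk.sock come from the trace.
 */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

    for (int retry = 0; retry < 100; retry++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            /* The RPC socket accepted: the target is up and listening. */
            printf("listening after %d retries\n", retry);
            close(fd);
            return 0;
        }
        close(fd);
        sleep(1);   /* retry interval: an assumption, not from the trace */
    }
    fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
    return 1;
}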
00:26:05.867 00:27:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # xtrace_disable
00:26:05.867 00:27:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... 2024-07-16 00:27:24.649523 through 00:27:24.666341: two more reconnect cycles against tqpair=0xb04980 (addr=10.0.0.2, port=4420), each ending with "Resetting controller failed." ...]
[... 2024-07-16 00:27:24.675701 through 00:27:24.689632: two more reconnect cycles, interleaved with the target startup below ...]
00:26:05.867 [2024-07-16 00:27:24.690598] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization...
00:26:05.867 [2024-07-16 00:27:24.690641] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:05.867 [2024-07-16 00:27:24.692463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... 2024-07-16 00:27:24.701979 through 00:27:24.744779: four more reconnect cycles, each ending with "Resetting controller failed." ...]
00:26:06.127 [2024-07-16 00:27:24.748881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:06.127 [2024-07-16 00:27:24.754180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.127 [2024-07-16 00:27:24.754594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.127 [2024-07-16 00:27:24.754613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.127 [2024-07-16 00:27:24.754620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.127 [2024-07-16 00:27:24.754793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.127 [2024-07-16 00:27:24.754966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.127 [2024-07-16 00:27:24.754976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.127 [2024-07-16 00:27:24.754983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.127 [2024-07-16 00:27:24.757794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.127 [2024-07-16 00:27:24.767214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.127 [2024-07-16 00:27:24.767718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.127 [2024-07-16 00:27:24.767734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.127 [2024-07-16 00:27:24.767742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.127 [2024-07-16 00:27:24.767919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.127 [2024-07-16 00:27:24.768097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.127 [2024-07-16 00:27:24.768107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.127 [2024-07-16 00:27:24.768113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.127 [2024-07-16 00:27:24.770994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.127 [2024-07-16 00:27:24.780220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.127 [2024-07-16 00:27:24.780645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.127 [2024-07-16 00:27:24.780670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.127 [2024-07-16 00:27:24.780677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.127 [2024-07-16 00:27:24.780849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.127 [2024-07-16 00:27:24.781022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.127 [2024-07-16 00:27:24.781031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.127 [2024-07-16 00:27:24.781037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.127 [2024-07-16 00:27:24.783858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.127 [2024-07-16 00:27:24.793279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.127 [2024-07-16 00:27:24.793726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.127 [2024-07-16 00:27:24.793747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.127 [2024-07-16 00:27:24.793754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.127 [2024-07-16 00:27:24.793927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.127 [2024-07-16 00:27:24.794102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.127 [2024-07-16 00:27:24.794111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.127 [2024-07-16 00:27:24.794119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.127 [2024-07-16 00:27:24.796932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.127 [2024-07-16 00:27:24.806299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.127 [2024-07-16 00:27:24.806709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.127 [2024-07-16 00:27:24.806727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.127 [2024-07-16 00:27:24.806735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.127 [2024-07-16 00:27:24.806923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.127 [2024-07-16 00:27:24.807103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.127 [2024-07-16 00:27:24.807113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.127 [2024-07-16 00:27:24.807119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.127 [2024-07-16 00:27:24.809920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.127 [2024-07-16 00:27:24.819360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.127 [2024-07-16 00:27:24.819858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.127 [2024-07-16 00:27:24.819875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.127 [2024-07-16 00:27:24.819882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.128 [2024-07-16 00:27:24.820060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.128 [2024-07-16 00:27:24.820248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.128 [2024-07-16 00:27:24.820258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.128 [2024-07-16 00:27:24.820265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.128 [2024-07-16 00:27:24.823090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.128 [2024-07-16 00:27:24.830019] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:06.128 [2024-07-16 00:27:24.830046] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:06.128 [2024-07-16 00:27:24.830053] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:06.128 [2024-07-16 00:27:24.830059] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:06.128 [2024-07-16 00:27:24.830064] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:06.128 [2024-07-16 00:27:24.830101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:26:06.128 [2024-07-16 00:27:24.830188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:26:06.128 [2024-07-16 00:27:24.830189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:06.128 [2024-07-16 00:27:24.832455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.128 [2024-07-16 00:27:24.832879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.128 [2024-07-16 00:27:24.832898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.128 [2024-07-16 00:27:24.832906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.128 [2024-07-16 00:27:24.833084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.128 [2024-07-16 00:27:24.833268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.128 [2024-07-16 00:27:24.833278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.128 [2024-07-16 00:27:24.833285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.128 [2024-07-16 00:27:24.836113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.128 [2024-07-16 00:27:24.845638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.128 [2024-07-16 00:27:24.846142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.128 [2024-07-16 00:27:24.846162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.128 [2024-07-16 00:27:24.846170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.128 [2024-07-16 00:27:24.846352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.128 [2024-07-16 00:27:24.846530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.128 [2024-07-16 00:27:24.846540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.128 [2024-07-16 00:27:24.846547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.128 [2024-07-16 00:27:24.849379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.128 [2024-07-16 00:27:24.858732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.128 [2024-07-16 00:27:24.859168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.128 [2024-07-16 00:27:24.859195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.128 [2024-07-16 00:27:24.859203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.128 [2024-07-16 00:27:24.859387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.128 [2024-07-16 00:27:24.859567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.128 [2024-07-16 00:27:24.859577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.128 [2024-07-16 00:27:24.859583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.128 [2024-07-16 00:27:24.862410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.128 [2024-07-16 00:27:24.871928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.128 [2024-07-16 00:27:24.872435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.128 [2024-07-16 00:27:24.872454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.128 [2024-07-16 00:27:24.872462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.128 [2024-07-16 00:27:24.872640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.128 [2024-07-16 00:27:24.872819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.128 [2024-07-16 00:27:24.872829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.128 [2024-07-16 00:27:24.872836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.128 [2024-07-16 00:27:24.875667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.128 [2024-07-16 00:27:24.885032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.128 [2024-07-16 00:27:24.885468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.128 [2024-07-16 00:27:24.885488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.128 [2024-07-16 00:27:24.885496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.128 [2024-07-16 00:27:24.885674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.128 [2024-07-16 00:27:24.885853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.128 [2024-07-16 00:27:24.885863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.128 [2024-07-16 00:27:24.885870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.128 [2024-07-16 00:27:24.888704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.128 [2024-07-16 00:27:24.898221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.128 [2024-07-16 00:27:24.898566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.128 [2024-07-16 00:27:24.898583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.128 [2024-07-16 00:27:24.898590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.128 [2024-07-16 00:27:24.898767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.128 [2024-07-16 00:27:24.898951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.128 [2024-07-16 00:27:24.898960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.128 [2024-07-16 00:27:24.898967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.128 [2024-07-16 00:27:24.901793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.128 [2024-07-16 00:27:24.911315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.128 [2024-07-16 00:27:24.911806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.128 [2024-07-16 00:27:24.911823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.128 [2024-07-16 00:27:24.911831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.128 [2024-07-16 00:27:24.912008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.128 [2024-07-16 00:27:24.912187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.128 [2024-07-16 00:27:24.912196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.128 [2024-07-16 00:27:24.912202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.128 [2024-07-16 00:27:24.915031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.128 [2024-07-16 00:27:24.924380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.128 [2024-07-16 00:27:24.924853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.128 [2024-07-16 00:27:24.924870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.128 [2024-07-16 00:27:24.924877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.128 [2024-07-16 00:27:24.925054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.128 [2024-07-16 00:27:24.925236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.128 [2024-07-16 00:27:24.925246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.128 [2024-07-16 00:27:24.925253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.128 [2024-07-16 00:27:24.928077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.128 [2024-07-16 00:27:24.937425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.128 [2024-07-16 00:27:24.937844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.128 [2024-07-16 00:27:24.937860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.128 [2024-07-16 00:27:24.937868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.128 [2024-07-16 00:27:24.938045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.128 [2024-07-16 00:27:24.938229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.128 [2024-07-16 00:27:24.938238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.128 [2024-07-16 00:27:24.938245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.128 [2024-07-16 00:27:24.941074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.128 [2024-07-16 00:27:24.950593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.128 [2024-07-16 00:27:24.951049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.128 [2024-07-16 00:27:24.951065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.128 [2024-07-16 00:27:24.951073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.128 [2024-07-16 00:27:24.951254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.129 [2024-07-16 00:27:24.951434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.129 [2024-07-16 00:27:24.951443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.129 [2024-07-16 00:27:24.951450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.129 [2024-07-16 00:27:24.954278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.129 [2024-07-16 00:27:24.963631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.129 [2024-07-16 00:27:24.964012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.129 [2024-07-16 00:27:24.964030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.129 [2024-07-16 00:27:24.964037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.129 [2024-07-16 00:27:24.964214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.129 [2024-07-16 00:27:24.964397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.129 [2024-07-16 00:27:24.964408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.129 [2024-07-16 00:27:24.964414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.129 [2024-07-16 00:27:24.967241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.129 [2024-07-16 00:27:24.976758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.129 [2024-07-16 00:27:24.977244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.129 [2024-07-16 00:27:24.977262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.129 [2024-07-16 00:27:24.977269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.129 [2024-07-16 00:27:24.977447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.129 [2024-07-16 00:27:24.977623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.129 [2024-07-16 00:27:24.977633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.129 [2024-07-16 00:27:24.977640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.388 [2024-07-16 00:27:24.980469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.388 [2024-07-16 00:27:24.989829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.388 [2024-07-16 00:27:24.990316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.388 [2024-07-16 00:27:24.990332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.388 [2024-07-16 00:27:24.990344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.388 [2024-07-16 00:27:24.990522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.388 [2024-07-16 00:27:24.990701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.388 [2024-07-16 00:27:24.990710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.388 [2024-07-16 00:27:24.990717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.388 [2024-07-16 00:27:24.993547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.388 [2024-07-16 00:27:25.002895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.388 [2024-07-16 00:27:25.003378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.388 [2024-07-16 00:27:25.003394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.388 [2024-07-16 00:27:25.003402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.388 [2024-07-16 00:27:25.003579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.388 [2024-07-16 00:27:25.003756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.388 [2024-07-16 00:27:25.003766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.388 [2024-07-16 00:27:25.003772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.388 [2024-07-16 00:27:25.006602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.388 [2024-07-16 00:27:25.015951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.388 [2024-07-16 00:27:25.016362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.388 [2024-07-16 00:27:25.016380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.388 [2024-07-16 00:27:25.016387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.388 [2024-07-16 00:27:25.016564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.389 [2024-07-16 00:27:25.016742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.389 [2024-07-16 00:27:25.016752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.389 [2024-07-16 00:27:25.016759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.389 [2024-07-16 00:27:25.019589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.389 [2024-07-16 00:27:25.029108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.389 [2024-07-16 00:27:25.029601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.389 [2024-07-16 00:27:25.029618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.389 [2024-07-16 00:27:25.029625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.389 [2024-07-16 00:27:25.029803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.389 [2024-07-16 00:27:25.029980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.389 [2024-07-16 00:27:25.029992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.389 [2024-07-16 00:27:25.029999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.389 [2024-07-16 00:27:25.032830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.389 [2024-07-16 00:27:25.042381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.389 [2024-07-16 00:27:25.042807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.389 [2024-07-16 00:27:25.042825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.389 [2024-07-16 00:27:25.042833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.389 [2024-07-16 00:27:25.043010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.389 [2024-07-16 00:27:25.043189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.389 [2024-07-16 00:27:25.043200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.389 [2024-07-16 00:27:25.043209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.389 [2024-07-16 00:27:25.046042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.389 [2024-07-16 00:27:25.055559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.389 [2024-07-16 00:27:25.055961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.389 [2024-07-16 00:27:25.055978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.389 [2024-07-16 00:27:25.055986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.389 [2024-07-16 00:27:25.056163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.389 [2024-07-16 00:27:25.056347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.389 [2024-07-16 00:27:25.056358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.389 [2024-07-16 00:27:25.056365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.389 [2024-07-16 00:27:25.059189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.389 [2024-07-16 00:27:25.068714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.389 [2024-07-16 00:27:25.069181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.389 [2024-07-16 00:27:25.069199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.389 [2024-07-16 00:27:25.069206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.389 [2024-07-16 00:27:25.069389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.389 [2024-07-16 00:27:25.069567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.389 [2024-07-16 00:27:25.069577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.389 [2024-07-16 00:27:25.069584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.389 [2024-07-16 00:27:25.072412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.389 [2024-07-16 00:27:25.081778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.389 [2024-07-16 00:27:25.082211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.389 [2024-07-16 00:27:25.082238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.389 [2024-07-16 00:27:25.082247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.389 [2024-07-16 00:27:25.082425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.389 [2024-07-16 00:27:25.082603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.389 [2024-07-16 00:27:25.082613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.389 [2024-07-16 00:27:25.082619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.389 [2024-07-16 00:27:25.085453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.389 [2024-07-16 00:27:25.094980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.389 [2024-07-16 00:27:25.095320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.389 [2024-07-16 00:27:25.095338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.389 [2024-07-16 00:27:25.095346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.389 [2024-07-16 00:27:25.095523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.389 [2024-07-16 00:27:25.095701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.389 [2024-07-16 00:27:25.095711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.389 [2024-07-16 00:27:25.095717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.389 [2024-07-16 00:27:25.098553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.389 [2024-07-16 00:27:25.108071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.389 [2024-07-16 00:27:25.108422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.389 [2024-07-16 00:27:25.108439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.389 [2024-07-16 00:27:25.108447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.389 [2024-07-16 00:27:25.108625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.389 [2024-07-16 00:27:25.108804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.389 [2024-07-16 00:27:25.108813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.389 [2024-07-16 00:27:25.108820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.389 [2024-07-16 00:27:25.111656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.389 [2024-07-16 00:27:25.121181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.389 [2024-07-16 00:27:25.121590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.389 [2024-07-16 00:27:25.121607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.389 [2024-07-16 00:27:25.121615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.389 [2024-07-16 00:27:25.121795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.389 [2024-07-16 00:27:25.121974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.389 [2024-07-16 00:27:25.121984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.389 [2024-07-16 00:27:25.121990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.389 [2024-07-16 00:27:25.124821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.389 [2024-07-16 00:27:25.134348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.389 [2024-07-16 00:27:25.134746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.389 [2024-07-16 00:27:25.134763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.389 [2024-07-16 00:27:25.134770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.389 [2024-07-16 00:27:25.134947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.389 [2024-07-16 00:27:25.135126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.389 [2024-07-16 00:27:25.135136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.389 [2024-07-16 00:27:25.135142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.389 [2024-07-16 00:27:25.137973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.389 [2024-07-16 00:27:25.147511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.389 [2024-07-16 00:27:25.147975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.389 [2024-07-16 00:27:25.147992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.389 [2024-07-16 00:27:25.148000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.389 [2024-07-16 00:27:25.148177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.389 [2024-07-16 00:27:25.148361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.389 [2024-07-16 00:27:25.148371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.389 [2024-07-16 00:27:25.148378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.389 [2024-07-16 00:27:25.151202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.389 [2024-07-16 00:27:25.160571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.389 [2024-07-16 00:27:25.161026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.389 [2024-07-16 00:27:25.161043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.389 [2024-07-16 00:27:25.161051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.390 [2024-07-16 00:27:25.161233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.390 [2024-07-16 00:27:25.161412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.390 [2024-07-16 00:27:25.161422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.390 [2024-07-16 00:27:25.161432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.390 [2024-07-16 00:27:25.164264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.390 [2024-07-16 00:27:25.173624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.390 [2024-07-16 00:27:25.174092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.390 [2024-07-16 00:27:25.174109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.390 [2024-07-16 00:27:25.174117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.390 [2024-07-16 00:27:25.174299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.390 [2024-07-16 00:27:25.174477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.390 [2024-07-16 00:27:25.174487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.390 [2024-07-16 00:27:25.174494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.390 [2024-07-16 00:27:25.177323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.390 [2024-07-16 00:27:25.186685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.390 [2024-07-16 00:27:25.187033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.390 [2024-07-16 00:27:25.187049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.390 [2024-07-16 00:27:25.187056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.390 [2024-07-16 00:27:25.187238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.390 [2024-07-16 00:27:25.187418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.390 [2024-07-16 00:27:25.187428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.390 [2024-07-16 00:27:25.187434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.390 [2024-07-16 00:27:25.190265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.390 [2024-07-16 00:27:25.199788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.390 [2024-07-16 00:27:25.200255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.390 [2024-07-16 00:27:25.200273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.390 [2024-07-16 00:27:25.200280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.390 [2024-07-16 00:27:25.200458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.390 [2024-07-16 00:27:25.200636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.390 [2024-07-16 00:27:25.200646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.390 [2024-07-16 00:27:25.200652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.390 [2024-07-16 00:27:25.203489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.390 [2024-07-16 00:27:25.212852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.390 [2024-07-16 00:27:25.213320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.390 [2024-07-16 00:27:25.213341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.390 [2024-07-16 00:27:25.213349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.390 [2024-07-16 00:27:25.213526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.390 [2024-07-16 00:27:25.213704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.390 [2024-07-16 00:27:25.213714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.390 [2024-07-16 00:27:25.213721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.390 [2024-07-16 00:27:25.216578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.390 [2024-07-16 00:27:25.225938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.390 [2024-07-16 00:27:25.226336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.390 [2024-07-16 00:27:25.226353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.390 [2024-07-16 00:27:25.226360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.390 [2024-07-16 00:27:25.226538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.390 [2024-07-16 00:27:25.226717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.390 [2024-07-16 00:27:25.226727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.390 [2024-07-16 00:27:25.226734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.390 [2024-07-16 00:27:25.229568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.390 [2024-07-16 00:27:25.239090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.390 [2024-07-16 00:27:25.239560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.390 [2024-07-16 00:27:25.239577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.390 [2024-07-16 00:27:25.239585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.390 [2024-07-16 00:27:25.239762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.390 [2024-07-16 00:27:25.239941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.390 [2024-07-16 00:27:25.239951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.390 [2024-07-16 00:27:25.239957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.650 [2024-07-16 00:27:25.242792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.650 [2024-07-16 00:27:25.252155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.650 [2024-07-16 00:27:25.252509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.650 [2024-07-16 00:27:25.252526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.650 [2024-07-16 00:27:25.252534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.650 [2024-07-16 00:27:25.252712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.650 [2024-07-16 00:27:25.252894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.650 [2024-07-16 00:27:25.252905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.650 [2024-07-16 00:27:25.252911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.650 [2024-07-16 00:27:25.255742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.650 [2024-07-16 00:27:25.265271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.650 [2024-07-16 00:27:25.265713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.650 [2024-07-16 00:27:25.265730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.650 [2024-07-16 00:27:25.265737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.650 [2024-07-16 00:27:25.265914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.650 [2024-07-16 00:27:25.266092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.650 [2024-07-16 00:27:25.266102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.650 [2024-07-16 00:27:25.266108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.650 [2024-07-16 00:27:25.268945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.650 [2024-07-16 00:27:25.278310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.650 [2024-07-16 00:27:25.278698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.650 [2024-07-16 00:27:25.278715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.650 [2024-07-16 00:27:25.278723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.650 [2024-07-16 00:27:25.278900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.650 [2024-07-16 00:27:25.279079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.650 [2024-07-16 00:27:25.279089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.650 [2024-07-16 00:27:25.279096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.650 [2024-07-16 00:27:25.281926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.650 [2024-07-16 00:27:25.291469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:06.650 [2024-07-16 00:27:25.291824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.650 [2024-07-16 00:27:25.291842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420
00:26:06.650 [2024-07-16 00:27:25.291849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set
00:26:06.650 [2024-07-16 00:27:25.292025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor
00:26:06.650 [2024-07-16 00:27:25.292202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:06.650 [2024-07-16 00:27:25.292212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:06.650 [2024-07-16 00:27:25.292218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:06.650 [2024-07-16 00:27:25.295049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:06.650 [2024-07-16 00:27:25.304587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.650 [2024-07-16 00:27:25.304981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.650 [2024-07-16 00:27:25.304998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.650 [2024-07-16 00:27:25.305005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.650 [2024-07-16 00:27:25.305182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.650 [2024-07-16 00:27:25.305367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.650 [2024-07-16 00:27:25.305376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.650 [2024-07-16 00:27:25.305383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.650 [2024-07-16 00:27:25.308210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.650 [2024-07-16 00:27:25.317712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.650 [2024-07-16 00:27:25.318062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.650 [2024-07-16 00:27:25.318078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.650 [2024-07-16 00:27:25.318085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.650 [2024-07-16 00:27:25.318268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.650 [2024-07-16 00:27:25.318447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.650 [2024-07-16 00:27:25.318456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.650 [2024-07-16 00:27:25.318463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.650 [2024-07-16 00:27:25.321294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.650 [2024-07-16 00:27:25.330818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.650 [2024-07-16 00:27:25.331230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.650 [2024-07-16 00:27:25.331248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.650 [2024-07-16 00:27:25.331255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.650 [2024-07-16 00:27:25.331432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.650 [2024-07-16 00:27:25.331610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.650 [2024-07-16 00:27:25.331620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.650 [2024-07-16 00:27:25.331626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.650 [2024-07-16 00:27:25.334458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.650 [2024-07-16 00:27:25.343984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.651 [2024-07-16 00:27:25.344370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.651 [2024-07-16 00:27:25.344387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.651 [2024-07-16 00:27:25.344398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.651 [2024-07-16 00:27:25.344575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.651 [2024-07-16 00:27:25.344754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.651 [2024-07-16 00:27:25.344763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.651 [2024-07-16 00:27:25.344770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.651 [2024-07-16 00:27:25.347600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.651 [2024-07-16 00:27:25.357125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.651 [2024-07-16 00:27:25.357474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.651 [2024-07-16 00:27:25.357491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.651 [2024-07-16 00:27:25.357499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.651 [2024-07-16 00:27:25.357675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.651 [2024-07-16 00:27:25.357854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.651 [2024-07-16 00:27:25.357864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.651 [2024-07-16 00:27:25.357871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.651 [2024-07-16 00:27:25.360710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.651 [2024-07-16 00:27:25.370240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.651 [2024-07-16 00:27:25.370588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.651 [2024-07-16 00:27:25.370605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.651 [2024-07-16 00:27:25.370613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.651 [2024-07-16 00:27:25.370789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.651 [2024-07-16 00:27:25.370967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.651 [2024-07-16 00:27:25.370976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.651 [2024-07-16 00:27:25.370983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.651 [2024-07-16 00:27:25.373817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.651 [2024-07-16 00:27:25.383346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.651 [2024-07-16 00:27:25.383759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.651 [2024-07-16 00:27:25.383778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.651 [2024-07-16 00:27:25.383785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.651 [2024-07-16 00:27:25.383962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.651 [2024-07-16 00:27:25.384145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.651 [2024-07-16 00:27:25.384159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.651 [2024-07-16 00:27:25.384166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.651 [2024-07-16 00:27:25.386998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.651 [2024-07-16 00:27:25.396522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.651 [2024-07-16 00:27:25.396995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.651 [2024-07-16 00:27:25.397011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.651 [2024-07-16 00:27:25.397019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.651 [2024-07-16 00:27:25.397197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.651 [2024-07-16 00:27:25.397381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.651 [2024-07-16 00:27:25.397392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.651 [2024-07-16 00:27:25.397399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.651 [2024-07-16 00:27:25.400229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.651 [2024-07-16 00:27:25.409618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.651 [2024-07-16 00:27:25.410099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.651 [2024-07-16 00:27:25.410117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.651 [2024-07-16 00:27:25.410124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.651 [2024-07-16 00:27:25.410305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.651 [2024-07-16 00:27:25.410483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.651 [2024-07-16 00:27:25.410493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.651 [2024-07-16 00:27:25.410499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.651 [2024-07-16 00:27:25.413332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.651 [2024-07-16 00:27:25.422687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.651 [2024-07-16 00:27:25.423024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.651 [2024-07-16 00:27:25.423041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.651 [2024-07-16 00:27:25.423048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.651 [2024-07-16 00:27:25.423230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.651 [2024-07-16 00:27:25.423409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.651 [2024-07-16 00:27:25.423418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.651 [2024-07-16 00:27:25.423425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.651 [2024-07-16 00:27:25.426254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.651 [2024-07-16 00:27:25.435792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.651 [2024-07-16 00:27:25.436264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.651 [2024-07-16 00:27:25.436282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.651 [2024-07-16 00:27:25.436290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.651 [2024-07-16 00:27:25.436468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.651 [2024-07-16 00:27:25.436648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.651 [2024-07-16 00:27:25.436658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.651 [2024-07-16 00:27:25.436666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.651 [2024-07-16 00:27:25.439500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.651 [2024-07-16 00:27:25.448854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.651 [2024-07-16 00:27:25.449190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.651 [2024-07-16 00:27:25.449207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.651 [2024-07-16 00:27:25.449216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.651 [2024-07-16 00:27:25.449399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.651 [2024-07-16 00:27:25.449578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.651 [2024-07-16 00:27:25.449588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.651 [2024-07-16 00:27:25.449597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.651 [2024-07-16 00:27:25.452424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.651 [2024-07-16 00:27:25.461953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.651 [2024-07-16 00:27:25.462374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.651 [2024-07-16 00:27:25.462391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.651 [2024-07-16 00:27:25.462398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.651 [2024-07-16 00:27:25.462575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.651 [2024-07-16 00:27:25.462754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.651 [2024-07-16 00:27:25.462764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.651 [2024-07-16 00:27:25.462772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.651 [2024-07-16 00:27:25.465608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.651 [2024-07-16 00:27:25.475122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.651 [2024-07-16 00:27:25.475592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.651 [2024-07-16 00:27:25.475610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.651 [2024-07-16 00:27:25.475618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.651 [2024-07-16 00:27:25.475798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.651 [2024-07-16 00:27:25.475978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.651 [2024-07-16 00:27:25.475988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.651 [2024-07-16 00:27:25.475994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.651 [2024-07-16 00:27:25.478825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.651 [2024-07-16 00:27:25.488182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.652 [2024-07-16 00:27:25.488673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.652 [2024-07-16 00:27:25.488691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.652 [2024-07-16 00:27:25.488699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.652 [2024-07-16 00:27:25.488876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.652 [2024-07-16 00:27:25.489052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.652 [2024-07-16 00:27:25.489062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.652 [2024-07-16 00:27:25.489069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.652 [2024-07-16 00:27:25.491897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.652 [2024-07-16 00:27:25.501247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.652 [2024-07-16 00:27:25.501615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.652 [2024-07-16 00:27:25.501631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.652 [2024-07-16 00:27:25.501639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.911 [2024-07-16 00:27:25.501816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.911 [2024-07-16 00:27:25.501996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.911 [2024-07-16 00:27:25.502007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.911 [2024-07-16 00:27:25.502014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.911 [2024-07-16 00:27:25.504841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
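The blocks above are iterations of one retry loop: bdev_nvme keeps reconnecting nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 before the target's listener exists, every connect() returns errno 111 (ECONNREFUSED), and each reset attempt is therefore marked failed. A minimal sketch of checking that same condition by hand, assuming a stock nc(1) is available (this command is not part of the test scripts):

# Exits non-zero with "Connection refused" until something listens on 4420,
# the same errno 111 the SPDK initiator logs in each block above.
nc -z -w 1 10.0.0.2 4420 && echo 'listener up' || echo 'connect refused (errno 111)'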
00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # return 0 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:06.911 [2024-07-16 00:27:25.514358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.911 [2024-07-16 00:27:25.514848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.911 [2024-07-16 00:27:25.514865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.911 [2024-07-16 00:27:25.514873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.911 [2024-07-16 00:27:25.515054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.911 [2024-07-16 00:27:25.515241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.911 [2024-07-16 00:27:25.515253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.911 [2024-07-16 00:27:25.515259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.911 [2024-07-16 00:27:25.518086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.911 [2024-07-16 00:27:25.527441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.911 [2024-07-16 00:27:25.527875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.911 [2024-07-16 00:27:25.527892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.911 [2024-07-16 00:27:25.527900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.911 [2024-07-16 00:27:25.528078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.911 [2024-07-16 00:27:25.528263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.911 [2024-07-16 00:27:25.528273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.911 [2024-07-16 00:27:25.528280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.911 [2024-07-16 00:27:25.531104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.911 [2024-07-16 00:27:25.540630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.911 [2024-07-16 00:27:25.541047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.911 [2024-07-16 00:27:25.541064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.911 [2024-07-16 00:27:25.541072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.911 [2024-07-16 00:27:25.541254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.911 [2024-07-16 00:27:25.541432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.911 [2024-07-16 00:27:25.541444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.911 [2024-07-16 00:27:25.541451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:06.911 [2024-07-16 00:27:25.544286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:06.911 [2024-07-16 00:27:25.549566] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.911 [2024-07-16 00:27:25.553811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.911 [2024-07-16 00:27:25.554142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.911 [2024-07-16 00:27:25.554159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.911 [2024-07-16 00:27:25.554166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.911 [2024-07-16 00:27:25.554352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:06.911 [2024-07-16 00:27:25.554530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.911 [2024-07-16 00:27:25.554540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.911 [2024-07-16 00:27:25.554546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:06.911 [2024-07-16 00:27:25.557373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.911 [2024-07-16 00:27:25.566902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.911 [2024-07-16 00:27:25.567296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.911 [2024-07-16 00:27:25.567314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.911 [2024-07-16 00:27:25.567322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.911 [2024-07-16 00:27:25.567499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.911 [2024-07-16 00:27:25.567677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.911 [2024-07-16 00:27:25.567687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.911 [2024-07-16 00:27:25.567694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.911 [2024-07-16 00:27:25.570530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.911 [2024-07-16 00:27:25.580052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.911 [2024-07-16 00:27:25.580517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.911 [2024-07-16 00:27:25.580537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.911 [2024-07-16 00:27:25.580545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.911 [2024-07-16 00:27:25.580723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.911 [2024-07-16 00:27:25.580902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.911 [2024-07-16 00:27:25.580912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.911 [2024-07-16 00:27:25.580920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.911 Malloc0 00:26:06.911 [2024-07-16 00:27:25.583762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:06.911 [2024-07-16 00:27:25.593116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.911 [2024-07-16 00:27:25.593584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.911 [2024-07-16 00:27:25.593606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.911 [2024-07-16 00:27:25.593613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.911 [2024-07-16 00:27:25.593792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.911 [2024-07-16 00:27:25.593971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.911 [2024-07-16 00:27:25.593981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.911 [2024-07-16 00:27:25.593988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:06.911 [2024-07-16 00:27:25.596818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:06.911 00:27:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:06.911 [2024-07-16 00:27:25.606169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.911 [2024-07-16 00:27:25.606512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.911 [2024-07-16 00:27:25.606529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb04980 with addr=10.0.0.2, port=4420 00:26:06.911 [2024-07-16 00:27:25.606536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb04980 is same with the state(5) to be set 00:26:06.911 [2024-07-16 00:27:25.606713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb04980 (9): Bad file descriptor 00:26:06.911 [2024-07-16 00:27:25.606891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.912 [2024-07-16 00:27:25.606901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.912 [2024-07-16 00:27:25.606907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.912 [2024-07-16 00:27:25.607171] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.912 [2024-07-16 00:27:25.609738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.912 00:27:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:06.912 00:27:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1658628 00:26:06.912 [2024-07-16 00:27:25.619258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.912 [2024-07-16 00:27:25.650597] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
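The rpc_cmd calls interleaved through the errors above are what finally bring the target up: once nvmf_subsystem_add_listener creates the listener on 10.0.0.2:4420 (tcp.c@981), the pending reset completes ("Resetting controller successful") and bdevperf pid 1658628 is waited on. A sketch of the same bring-up as standalone RPCs, assuming SPDK's stock scripts/rpc.py wrapper in place of the harness's rpc_cmd:

# TCP transport with 8192-byte in-capsule data, as in host/bdevperf.sh@17.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# 64 MiB malloc bdev with 512-byte blocks to back the namespace (bdevperf.sh@18).
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# Subsystem, namespace, and the listener the initiator has been retrying (bdevperf.sh@19-21).
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420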
00:26:16.879
00:26:16.879 Latency(us)
00:26:16.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:16.879 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:16.879 Verification LBA range: start 0x0 length 0x4000
00:26:16.879 Nvme1n1 : 15.01 8044.88 31.43 12573.55 0.00 6188.42 644.67 19831.76
00:26:16.879 ===================================================================================================================
00:26:16.879 Total : 8044.88 31.43 12573.55 0.00 6188.42 644.67 19831.76
00:26:16.879 00:27:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:27:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable
00:27:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:27:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:27:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:27:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:27:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:27:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:27:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:27:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:27:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1659559 ']'
00:27:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1659559
00:27:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@942 -- # '[' -z 1659559 ']'
00:27:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # kill -0 1659559
00:27:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@947 -- # uname
00:27:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:27:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1659559
00:27:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # process_name=reactor_1
00:27:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']'
00:27:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1659559'
killing process with pid 1659559
00:27:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@961 -- # kill 1659559
00:27:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # wait 1659559
00:27:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:27:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
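The MiB/s column in the table above follows directly from IOPS at the job's 4096-byte IO size, which makes for a quick consistency check:

# 8044.88 IO/s * 4096 B/IO / 2^20 B/MiB ~= 31.43 MiB/s, matching the report.
awk 'BEGIN { printf "%.2f MiB/s\n", 8044.88 * 4096 / 1048576 }'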
00:26:16.879 00:27:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:16.879 00:27:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:16.879 00:27:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:16.879 00:27:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.879 00:27:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:16.879 00:27:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.813 00:27:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:17.813 00:26:17.813 real 0m25.428s 00:26:17.813 user 1m2.433s 00:26:17.813 sys 0m5.720s 00:26:17.813 00:27:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:26:17.813 00:27:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.813 ************************************ 00:26:17.813 END TEST nvmf_bdevperf 00:26:17.813 ************************************ 00:26:17.813 00:27:36 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:26:17.813 00:27:36 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:17.813 00:27:36 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:26:17.813 00:27:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:26:17.813 00:27:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:18.071 ************************************ 00:26:18.071 START TEST nvmf_target_disconnect 00:26:18.071 ************************************ 00:26:18.071 00:27:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:18.071 * Looking for test storage... 
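run_test, used at nvmf.sh@123 above to launch target_disconnect.sh, is the harness wrapper that prints the START TEST/END TEST banners and propagates the suite's exit status. A rough sketch of the pattern, a deliberate simplification of what common/autotest_common.sh actually does (the real helper also tracks timings and xtrace state):

# Hypothetical simplification: banner, run the suite, banner, keep its status.
run_test() {
	local name=$1; shift
	echo "************************************"
	echo "START TEST $name"
	echo "************************************"
	"$@"
	local rc=$?
	echo "************************************"
	echo "END TEST $name"
	echo "************************************"
	return $rc
}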
00:26:18.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:18.071 00:27:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:18.071 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:26:18.072 00:27:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:23.370 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:23.371 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:23.371 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.371 00:27:42 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:23.371 Found net devices under 0000:86:00.0: cvl_0_0 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:23.371 Found net devices under 0000:86:00.1: cvl_0_1 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:23.371 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:23.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:23.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:26:23.629 00:26:23.629 --- 10.0.0.2 ping statistics --- 00:26:23.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.629 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:23.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:23.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:26:23.629 00:26:23.629 --- 10.0.0.1 ping statistics --- 00:26:23.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.629 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # xtrace_disable 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:23.629 ************************************ 00:26:23.629 START TEST nvmf_target_disconnect_tc1 00:26:23.629 ************************************ 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1117 -- # nvmf_target_disconnect_tc1 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # local es=0 00:26:23.629 
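
The nvmf_tcp_init sequence above (nvmf/common.sh@229-268) builds the test topology out of the two discovered cvl devices: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2 to act as the target side, while cvl_0_1 stays in the default namespace as the initiator side at 10.0.0.1; an iptables rule opens TCP/4420 on the initiator interface and the two pings verify reachability in both directions before any NVMe/TCP traffic starts. A minimal standalone sketch of the same setup, using the device names and addresses from this log (assumptions: the two ports are wired back to back so traffic can cross the namespace boundary, and they carry no other configuration):

    # Recreate the target/initiator split performed by nvmf_tcp_init.
    TARGET_IF=cvl_0_0        # moves into the namespace, target side, 10.0.0.2
    INITIATOR_IF=cvl_0_1     # stays in the default namespace, 10.0.0.1
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

Running the target inside the namespace while the initiator stays outside lets a single host exercise a real TCP path end to end, which is why NVMF_APP is prefixed with "ip netns exec cvl_0_0_ns_spdk" at common.sh@270.
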
00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:23.629 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:23.888 [2024-07-16 00:27:42.547478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.888 [2024-07-16 00:27:42.547516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x157de60 with addr=10.0.0.2, port=4420 00:26:23.888 [2024-07-16 00:27:42.547532] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:23.888 [2024-07-16 00:27:42.547540] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:23.888 [2024-07-16 00:27:42.547546] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:23.888 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:23.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:23.888 Initializing NVMe Controllers 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@645 -- # es=1 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:26:23.888 00:26:23.888 real 0m0.098s 00:26:23.888 user 0m0.039s 00:26:23.888 sys 0m0.059s 00:26:23.888 00:27:42 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:23.888 ************************************ 00:26:23.888 END TEST nvmf_target_disconnect_tc1 00:26:23.888 ************************************ 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1136 -- # return 0 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # xtrace_disable 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:23.888 ************************************ 00:26:23.888 START TEST nvmf_target_disconnect_tc2 00:26:23.888 ************************************ 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1117 -- # nvmf_target_disconnect_tc2 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1664709 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1664709 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@823 -- # '[' -z 1664709 ']' 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # local max_retries=100 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # xtrace_disable 00:26:23.888 00:27:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.888 [2024-07-16 00:27:42.677887] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
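
tc1, which finished just above, is an expected-failure case: the reconnect example is launched before any target is listening on 10.0.0.2:4420, so connect() fails with errno 111 (ECONNREFUSED), spdk_nvme_probe() cannot create the admin qpair, and the binary exits nonzero. The NOT wrapper from autotest_common.sh inverts that status, so the test passes precisely because the probe failed (es=1 above). A simplified sketch of the pattern; the real helper also resolves its argument to an executable first (@630-636) and appears to screen exit codes above 128 separately (@653):

    # Simplified NOT: succeed only if the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    NOT ./build/examples/reconnect \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        && echo 'tc1 ok: probe failed as expected'
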
00:26:23.888 [2024-07-16 00:27:42.677928] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.145 [2024-07-16 00:27:42.748730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:24.145 [2024-07-16 00:27:42.826377] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:24.145 [2024-07-16 00:27:42.826413] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:24.146 [2024-07-16 00:27:42.826420] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:24.146 [2024-07-16 00:27:42.826426] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:24.146 [2024-07-16 00:27:42.826430] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:24.146 [2024-07-16 00:27:42.826538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:26:24.146 [2024-07-16 00:27:42.826650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:26:24.146 [2024-07-16 00:27:42.826756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:24.146 [2024-07-16 00:27:42.826758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # return 0 00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:24.709 Malloc0 00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:24.709 [2024-07-16 00:27:43.544277] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
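
For tc2 the target is started for real: nvmfappstart launches nvmf_tgt inside the namespace with reactor mask 0xF0 (hence the four reactors on cores 4-7 above), waitforlisten blocks until the app answers on /var/tmp/spdk.sock, and the storage stack is then assembled over RPC. The rpc_cmd calls around this point (Malloc0 bdev, TCP transport, subsystem cnode1, namespace, data and discovery listeners on 10.0.0.2:4420) are equivalent to driving scripts/rpc.py by hand; a hedged sketch, assuming the default RPC socket path:

    # Same configuration as the rpc_cmd sequence in this test, via rpc.py.
    RPC=./scripts/rpc.py
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Because /var/tmp/spdk.sock is a path-based UNIX socket, rpc.py can reach the target from the default namespace even though nvmf_tgt itself runs inside cvl_0_0_ns_spdk.
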
00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:24.709 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:24.967 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:24.967 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:24.967 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:24.967 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:24.967 [2024-07-16 00:27:43.569349] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.967 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:24.967 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:24.967 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:24.967 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:24.967 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:24.967 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1664745 00:26:24.967 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:24.967 00:27:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:26.870 00:27:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1664709 00:26:26.870 00:27:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O 
failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 [2024-07-16 00:27:45.596848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 
Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 [2024-07-16 00:27:45.597049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write 
completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Write completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 Read completed with error (sct=0, sc=8) 00:26:26.870 starting I/O failed 00:26:26.870 [2024-07-16 00:27:45.597248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.870 [2024-07-16 00:27:45.597548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.870 [2024-07-16 00:27:45.597567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.870 qpair failed and we were unable to recover it. 00:26:26.870 [2024-07-16 00:27:45.597831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.870 [2024-07-16 00:27:45.597863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.870 qpair failed and we were unable to recover it. 00:26:26.870 [2024-07-16 00:27:45.598099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.870 [2024-07-16 00:27:45.598132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.870 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.598451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.598484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.598671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.598702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.598880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.598912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 
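
This is the disconnect injection itself: tc2 starts the reconnect example against the live target (reconnectpid=1664745 above), sleeps 2 s, then SIGKILLs nvmf_tgt (kill -9 1664709). Every I/O still queued on the I/O qpairs completes in error with sct=0, sc=8, the generic NVMe status "Command Aborted due to SQ Deletion", and each qpair reports CQ transport error -6 (ENXIO) as it is torn down. The sequence reduces to (paths shortened from this workspace):

    # Core of tc2: hard-kill the target mid-I/O and watch the initiator.
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"   # nvmf_tgt's pid; in-flight I/O now completes with sc=8
    sleep 2

With queue depth 32 (-q 32) per qpair, the bursts of roughly 32 "completed with error" lines per qpair above are exactly the outstanding window being flushed.
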
00:26:26.871 [2024-07-16 00:27:45.599240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.599273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.599522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.599554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.599848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.599880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.600199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.600247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.600584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.600616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.600946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.600977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.601300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.601333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.601585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.601616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.601846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.601858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.602066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.602097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 
00:26:26.871 [2024-07-16 00:27:45.602290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.602322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.602571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.602602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.602835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.602866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.603136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.603167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.603407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.603440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.603754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.603785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.604026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.604057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.604361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.604393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.604701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.604733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.605064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.605095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 
00:26:26.871 [2024-07-16 00:27:45.605267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.605300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.605530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.605562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.605858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.605890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.606166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.606197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.606556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.606603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.606863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.606895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.607223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.607237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.607503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.607515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.607790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.607802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.607938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.607950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 
00:26:26.871 [2024-07-16 00:27:45.608171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.608187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.608370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.608383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.608631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.608643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.608852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.608864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.609016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.609027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.609262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.609275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.609524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.609536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.609720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.871 [2024-07-16 00:27:45.609732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.871 qpair failed and we were unable to recover it. 00:26:26.871 [2024-07-16 00:27:45.610002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.610013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.610220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.610236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 
00:26:26.872 [2024-07-16 00:27:45.610544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.610555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.610814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.610845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.611090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.611120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.611425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.611457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.611727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.611758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.612017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.612048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.612367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.612399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.612622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.612635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.612898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.612910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.613138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.613149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 
00:26:26.872 [2024-07-16 00:27:45.613419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.613431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.613643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.613655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.613933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.613964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.614287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.614318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.614602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.614615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.614900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.614912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.615115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.615147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.615472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.615504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.615814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.615845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.616162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.616194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 
00:26:26.872 [2024-07-16 00:27:45.616550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.616583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.616895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.616926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.617220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.617263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.617577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.617608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.617942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.617972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.618263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.618295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.618630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.618660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.618919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.618950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.619267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.619299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.619601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.619632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 
00:26:26.872 [2024-07-16 00:27:45.619869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.619911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.620233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.620265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.620493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.620523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.620864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.620894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.621205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.621243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.621407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.621437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.621680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.621711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.621914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.621926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.872 [2024-07-16 00:27:45.622180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.872 [2024-07-16 00:27:45.622211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.872 qpair failed and we were unable to recover it. 00:26:26.873 [2024-07-16 00:27:45.622538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.873 [2024-07-16 00:27:45.622576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.873 qpair failed and we were unable to recover it. 
00:26:26.873 [2024-07-16 00:27:45.622859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.873 [2024-07-16 00:27:45.622890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.873 qpair failed and we were unable to recover it. 00:26:26.873 [2024-07-16 00:27:45.623183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.873 [2024-07-16 00:27:45.623214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.873 qpair failed and we were unable to recover it. 00:26:26.873 [2024-07-16 00:27:45.623543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.873 [2024-07-16 00:27:45.623574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.873 qpair failed and we were unable to recover it. 00:26:26.873 [2024-07-16 00:27:45.623916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.873 [2024-07-16 00:27:45.623946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.873 qpair failed and we were unable to recover it. 00:26:26.873 [2024-07-16 00:27:45.624266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.873 [2024-07-16 00:27:45.624299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.873 qpair failed and we were unable to recover it. 00:26:26.873 [2024-07-16 00:27:45.624558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.873 [2024-07-16 00:27:45.624589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.873 qpair failed and we were unable to recover it. 00:26:26.873 [2024-07-16 00:27:45.624837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.873 [2024-07-16 00:27:45.624868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.873 qpair failed and we were unable to recover it. 00:26:26.873 [2024-07-16 00:27:45.625183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.873 [2024-07-16 00:27:45.625215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.873 qpair failed and we were unable to recover it. 00:26:26.873 [2024-07-16 00:27:45.625518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.873 [2024-07-16 00:27:45.625549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.873 qpair failed and we were unable to recover it. 00:26:26.873 [2024-07-16 00:27:45.625782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.873 [2024-07-16 00:27:45.625793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.873 qpair failed and we were unable to recover it. 
00:26:26.873 [2024-07-16 00:27:45.626107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.626138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.626459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.626490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.626797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.626808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.627056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.627079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.627265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.627277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.627530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.627561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.627735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.627766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.628075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.628106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.628418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.628450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.628697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.628728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.628907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.628938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.629182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.629213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.629469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.629501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.629727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.629739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.629930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.629942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.630195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.630235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.630472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.630503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.630731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.630762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.631056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.631087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.631320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.631352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.631582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.631596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.631817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.631829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.632003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.632034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.632337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.632369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.632644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.632655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.632903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.632934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.633158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.633189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.633510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.633542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.633834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.633858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.873 [2024-07-16 00:27:45.634056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.873 [2024-07-16 00:27:45.634067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.873 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.634263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.634274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.634419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.634430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.634630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.634642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.634771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.634783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.635039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.635070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.635299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.635331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.635517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.635548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.635861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.635892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.636205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.636245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.636545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.636575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.636909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.636939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.637237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.637269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.637517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.637548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.637876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.637908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.638151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.638182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.638453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.638486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.638660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.638672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.638870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.638901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.639058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.639089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.639417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.639449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.639792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.639824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.640113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.640144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.640475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.640507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.640752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.640783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.640960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.640972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.641109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.641139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.641447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.641478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.641738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.641769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.642115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.642146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.642464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.642495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.642723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.874 [2024-07-16 00:27:45.642759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.874 qpair failed and we were unable to recover it.
00:26:26.874 [2024-07-16 00:27:45.642971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.642982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.643171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.643202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.643533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.643565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.643872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.643883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.644146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.644177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.644425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.644456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.644811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.644841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.645155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.645186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.645428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.645460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.645728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.645758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.646010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.646022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.646306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.646339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.646520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.646551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.646798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.646829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.647087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.647098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.647376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.647388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.647571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.647582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.647729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.647741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.647936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.647948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.648244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.648292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.648536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.648566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.648832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.648862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.649203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.649242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.649535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.649566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.649801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.649832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.650050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.650061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.650340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.650372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.650663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.650693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.650930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.650960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.651282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.651313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.651636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.651647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.651830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.651843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.652004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.652035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.652261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.652293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.652518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.652549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.652870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.652901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.653200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.653241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.653558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.653589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.653885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.653916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.654245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.654282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.654575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.875 [2024-07-16 00:27:45.654606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.875 qpair failed and we were unable to recover it.
00:26:26.875 [2024-07-16 00:27:45.654840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.654871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.655111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.655123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.655324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.655356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.655640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.655671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.655996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.656007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.656292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.656304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.656582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.656614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.656906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.656937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.657179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.657209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.657531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.657562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.657861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.657892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.658068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.658099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.658344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.658375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.658678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.658709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.658968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.658999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.659247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.659279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.659617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.659648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.659965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.659996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.660289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.660320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.660498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.660529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.660851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.660883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.661184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.661215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.661467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.661499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.661732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.661764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.662083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.662114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Write completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Write completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Write completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Write completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Write completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Write completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Write completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Write completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Write completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 Read completed with error (sct=0, sc=8)
00:26:26.876 starting I/O failed
00:26:26.876 [2024-07-16 00:27:45.662612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:26.876 [2024-07-16 00:27:45.662932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.663000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.663211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.663264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.663579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.663612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.663905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.663937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.876 [2024-07-16 00:27:45.664178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.876 [2024-07-16 00:27:45.664210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:26.876 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.664576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.664607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.664868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.664899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.665210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.665253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.665579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.665610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.665870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.665902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.666194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.666235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.666556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.666587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.666903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.666935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.667186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.667218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.667582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.667617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.667911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.667942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.668258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.668290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.668594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.668625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.668974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.669005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.669297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.669329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.669578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.669609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.669920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.669951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.670268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.670300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.670604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.670635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.670795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.670825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.671113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.671143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.671459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.671490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.671708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.671720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.671917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.671929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.672112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.672125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.672324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.877 [2024-07-16 00:27:45.672337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:26.877 qpair failed and we were unable to recover it.
00:26:26.877 [2024-07-16 00:27:45.672524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.877 [2024-07-16 00:27:45.672535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.877 qpair failed and we were unable to recover it. 00:26:26.877 [2024-07-16 00:27:45.672829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.877 [2024-07-16 00:27:45.672860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.877 qpair failed and we were unable to recover it. 00:26:26.877 [2024-07-16 00:27:45.673131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.877 [2024-07-16 00:27:45.673167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.877 qpair failed and we were unable to recover it. 00:26:26.877 [2024-07-16 00:27:45.673440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.877 [2024-07-16 00:27:45.673472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.877 qpair failed and we were unable to recover it. 00:26:26.877 [2024-07-16 00:27:45.673805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.877 [2024-07-16 00:27:45.673836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.877 qpair failed and we were unable to recover it. 00:26:26.877 [2024-07-16 00:27:45.674012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.877 [2024-07-16 00:27:45.674042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.877 qpair failed and we were unable to recover it. 00:26:26.877 [2024-07-16 00:27:45.674309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.877 [2024-07-16 00:27:45.674340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.877 qpair failed and we were unable to recover it. 00:26:26.877 [2024-07-16 00:27:45.674636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.877 [2024-07-16 00:27:45.674667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.877 qpair failed and we were unable to recover it. 00:26:26.877 [2024-07-16 00:27:45.674992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.877 [2024-07-16 00:27:45.675022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.877 qpair failed and we were unable to recover it. 00:26:26.877 [2024-07-16 00:27:45.675287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.877 [2024-07-16 00:27:45.675319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.877 qpair failed and we were unable to recover it. 
00:26:26.877 [2024-07-16 00:27:45.675658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.877 [2024-07-16 00:27:45.675689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.877 qpair failed and we were unable to recover it. 00:26:26.877 [2024-07-16 00:27:45.675925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.877 [2024-07-16 00:27:45.675956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.877 qpair failed and we were unable to recover it. 00:26:26.877 [2024-07-16 00:27:45.676181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.877 [2024-07-16 00:27:45.676212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.877 qpair failed and we were unable to recover it. 00:26:26.877 [2024-07-16 00:27:45.676464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.877 [2024-07-16 00:27:45.676495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.877 qpair failed and we were unable to recover it. 00:26:26.877 [2024-07-16 00:27:45.676744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.877 [2024-07-16 00:27:45.676776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.877 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.677004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.677035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.677357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.677388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.677686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.677717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.677955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.677986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.678303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.678351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 
00:26:26.878 [2024-07-16 00:27:45.678597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.678628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.678838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.678850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.679047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.679078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.679369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.679401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.679642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.679673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.679856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.679886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.680218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.680235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.680436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.680448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.680702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.680714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.680944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.680975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 
00:26:26.878 [2024-07-16 00:27:45.681247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.681279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.681520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.681551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.681855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.681867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.682109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.682139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.682436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.682468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.682801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.682831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.683141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.683173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.683441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.683473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.683784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.683815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.683997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.684028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 
00:26:26.878 [2024-07-16 00:27:45.684317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.684349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.684577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.684608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.684928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.684966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.685245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.685279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.685515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.685546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.685855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.685886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.686113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.686145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.686339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.686370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.686681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.686712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.878 [2024-07-16 00:27:45.687026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.687037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 
00:26:26.878 [2024-07-16 00:27:45.687190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.878 [2024-07-16 00:27:45.687202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.878 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.687480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.687491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.687728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.687759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.687922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.687954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.688273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.688305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.688572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.688603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.688946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.688977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.689246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.689279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.689627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.689657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.689902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.689933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 
00:26:26.879 [2024-07-16 00:27:45.690233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.690266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.690608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.690639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.690947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.690959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.691167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.691179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.691430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.691462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.691695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.691727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.692043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.692074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.692338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.692371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.692665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.692697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.693017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.693048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 
00:26:26.879 [2024-07-16 00:27:45.693289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.693321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.693572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.693603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.693909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.693920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.694197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.694208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.694537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.694569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.694883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.694914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.695223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.695265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.695566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.695597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.695842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.695873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.696161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.696192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 
00:26:26.879 [2024-07-16 00:27:45.696523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.696555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.696870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.696902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.697238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.697277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.697520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.697551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.697896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.697927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.698263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.698296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.698618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.698650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.698877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.698890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.699043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.699075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.699388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.699420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 
00:26:26.879 [2024-07-16 00:27:45.699734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.699765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.879 [2024-07-16 00:27:45.699986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.879 [2024-07-16 00:27:45.699998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.879 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.700295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.700308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.700493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.700505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.700729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.700760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.701100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.701132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.701436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.701468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.701730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.701742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.702019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.702050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.702367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.702399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 
00:26:26.880 [2024-07-16 00:27:45.702705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.702736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.703047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.703079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.703393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.703425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.703737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.703767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.704007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.704031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.704223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.704239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.704466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.704478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.704748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.704759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.705006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.705018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.705221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.705236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 
00:26:26.880 [2024-07-16 00:27:45.705463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.705493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.705730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.705761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.706056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.706087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.706394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.706427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.706616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.706648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.706965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.706996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.707297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.707328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.707623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.707655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.707884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.707916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.708251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.708262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 
00:26:26.880 [2024-07-16 00:27:45.708538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.708569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.708864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.708896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.709219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.709265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.709586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.709617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.709911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.709943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.710204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.710246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.710502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.710534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.710884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.710923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.711153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.711165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.711427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.711440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 
00:26:26.880 [2024-07-16 00:27:45.711690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.711705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.880 [2024-07-16 00:27:45.711981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.880 [2024-07-16 00:27:45.712013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.880 qpair failed and we were unable to recover it. 00:26:26.881 [2024-07-16 00:27:45.712345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.881 [2024-07-16 00:27:45.712377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.881 qpair failed and we were unable to recover it. 00:26:26.881 [2024-07-16 00:27:45.712623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.881 [2024-07-16 00:27:45.712666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.881 qpair failed and we were unable to recover it. 00:26:26.881 [2024-07-16 00:27:45.713022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.881 [2024-07-16 00:27:45.713070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.881 qpair failed and we were unable to recover it. 00:26:26.881 [2024-07-16 00:27:45.713281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.881 [2024-07-16 00:27:45.713294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.881 qpair failed and we were unable to recover it. 00:26:26.881 [2024-07-16 00:27:45.713488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.881 [2024-07-16 00:27:45.713502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.881 qpair failed and we were unable to recover it. 00:26:26.881 [2024-07-16 00:27:45.713791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.881 [2024-07-16 00:27:45.713803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.881 qpair failed and we were unable to recover it. 00:26:26.881 [2024-07-16 00:27:45.714005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.881 [2024-07-16 00:27:45.714017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.881 qpair failed and we were unable to recover it. 00:26:26.881 [2024-07-16 00:27:45.714204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.881 [2024-07-16 00:27:45.714215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.881 qpair failed and we were unable to recover it. 
00:26:26.881 [2024-07-16 00:27:45.714508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.881 [2024-07-16 00:27:45.714522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.881 qpair failed and we were unable to recover it. 00:26:26.881 [2024-07-16 00:27:45.714712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.881 [2024-07-16 00:27:45.714727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.881 qpair failed and we were unable to recover it. 00:26:26.881 [2024-07-16 00:27:45.715007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.881 [2024-07-16 00:27:45.715019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:26.881 qpair failed and we were unable to recover it. 00:26:27.152 [2024-07-16 00:27:45.715298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.152 [2024-07-16 00:27:45.715311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.152 qpair failed and we were unable to recover it. 00:26:27.152 [2024-07-16 00:27:45.715604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.153 [2024-07-16 00:27:45.715617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.153 qpair failed and we were unable to recover it. 00:26:27.153 [2024-07-16 00:27:45.715870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.153 [2024-07-16 00:27:45.715882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.153 qpair failed and we were unable to recover it. 00:26:27.153 [2024-07-16 00:27:45.716102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.153 [2024-07-16 00:27:45.716114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.153 qpair failed and we were unable to recover it. 00:26:27.153 [2024-07-16 00:27:45.716368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.153 [2024-07-16 00:27:45.716402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.153 qpair failed and we were unable to recover it. 00:26:27.153 [2024-07-16 00:27:45.716727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.153 [2024-07-16 00:27:45.716760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.153 qpair failed and we were unable to recover it. 
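[Editor's aside, not part of the test output: on Linux, errno = 111 is ECONNREFUSED, i.e. the peer actively refused the TCP connection, which typically means nothing was listening on 10.0.0.2:4420 at the moment of the attempt. A minimal standalone sketch (plain POSIX sockets, not SPDK code; address and port taken from the log) that reproduces the exact errno posix_sock_create() reports above:]

/* repro_econnrefused.c -- minimal sketch, not SPDK code.
 * Connecting to a TCP address/port with no listener fails with
 * errno 111 (ECONNREFUSED) on Linux, the value logged above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),            /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With a reachable host but no listener on the port, this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

[A refused connection is distinct from a timeout (ETIMEDOUT) or an unreachable host (EHOSTUNREACH): errno 111 indicates the target host answered with a TCP RST, so the network path itself was up.]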
00:26:27.153 [2024-07-16 00:27:45.716957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6c000 is same with the state(5) to be set
00:26:27.153 [2024-07-16 00:27:45.717345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.153 [2024-07-16 00:27:45.717384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.153 qpair failed and we were unable to recover it.
00:26:27.153 [... the same connect()-refused / qpair-failure sequence continues from 00:27:45.717 through 00:27:45.729, for tqpair=0x1a5ded0, then tqpair=0x7f9174000b90, then tqpair=0x1a5ded0 again, always against addr=10.0.0.2, port=4420 ...]
00:26:27.154 [2024-07-16 00:27:45.730174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.730190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.730407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.730423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.730734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.730766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.731111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.731143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.731462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.731494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.731771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.731802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.732144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.732175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.732509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.732541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.732855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.732886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.733109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.733125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 
00:26:27.154 [2024-07-16 00:27:45.733273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.733289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.733501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.733533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.733850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.733882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.734058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.734074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.734306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.734344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.734672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.734703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.734945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.734977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.735284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.735300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.735528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.735559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.735862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.735893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 
00:26:27.154 [2024-07-16 00:27:45.736185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.736216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.736500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.736533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.736764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.736795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.737144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.737176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.737502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.737535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.737830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.737861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.738088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.738104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.738388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.738421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.738766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.738798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.739110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.739126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 
00:26:27.154 [2024-07-16 00:27:45.739408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.739441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.739735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.739767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.740011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.740043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.740309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.154 [2024-07-16 00:27:45.740342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.154 qpair failed and we were unable to recover it. 00:26:27.154 [2024-07-16 00:27:45.740640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.740671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.741014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.741046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.741364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.741380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.741688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.741704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.741913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.741945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.742261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.742294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 
00:26:27.155 [2024-07-16 00:27:45.742609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.742641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.742888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.742920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.743215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.743235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.743495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.743511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.743706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.743722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.743924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.743940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.744147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.744179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.744508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.744540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.744834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.744865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.745186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.745219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 
00:26:27.155 [2024-07-16 00:27:45.745482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.745514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.745757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.745788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.746100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.746116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.746376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.746420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.746678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.746710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.747051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.747084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.747377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.747410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.747701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.747733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.748029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.748060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.748377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.748409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 
00:26:27.155 [2024-07-16 00:27:45.748646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.748677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.748984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.749016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.749281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.749313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.749680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.749711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.750035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.750068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.750382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.750415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.750741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.750772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.751071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.751104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.751339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.751371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.751631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.751663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 
00:26:27.155 [2024-07-16 00:27:45.751921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.751937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.752142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.752158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.155 qpair failed and we were unable to recover it. 00:26:27.155 [2024-07-16 00:27:45.752383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.155 [2024-07-16 00:27:45.752399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.752608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.752624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.752899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.752914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.753200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.753217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.753521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.753553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.753823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.753854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.754104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.754120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.754406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.754439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 
00:26:27.156 [2024-07-16 00:27:45.754603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.754634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.754928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.754960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.755242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.755279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.755519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.755551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.755850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.755882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.756121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.756152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.756389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.756422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.756598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.756629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.756960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.756991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.757308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.757340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 
00:26:27.156 [2024-07-16 00:27:45.757581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.757613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.757874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.757889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.758091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.758106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.758305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.758321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.758608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.758623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.758906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.758922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.759241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.759275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.759540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.759572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.759892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.759924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.760222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.760290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 
00:26:27.156 [2024-07-16 00:27:45.760572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.760588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.760854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.760886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.761199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.761239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.761489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.761521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.761868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.761899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.762138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.762154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.762439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.762456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.762746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.762777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.763121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.763153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.763398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.763449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 
00:26:27.156 [2024-07-16 00:27:45.763796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.763828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.764142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.764173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.764503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.764536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.156 [2024-07-16 00:27:45.764858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.156 [2024-07-16 00:27:45.764889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.156 qpair failed and we were unable to recover it. 00:26:27.157 [2024-07-16 00:27:45.765181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.157 [2024-07-16 00:27:45.765212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.157 qpair failed and we were unable to recover it. 00:26:27.157 [2024-07-16 00:27:45.765541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.157 [2024-07-16 00:27:45.765572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.157 qpair failed and we were unable to recover it. 00:26:27.157 [2024-07-16 00:27:45.765891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.157 [2024-07-16 00:27:45.765922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.157 qpair failed and we were unable to recover it. 00:26:27.157 [2024-07-16 00:27:45.766240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.157 [2024-07-16 00:27:45.766274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.157 qpair failed and we were unable to recover it. 00:26:27.157 [2024-07-16 00:27:45.766575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.157 [2024-07-16 00:27:45.766607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.157 qpair failed and we were unable to recover it. 00:26:27.157 [2024-07-16 00:27:45.766868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.157 [2024-07-16 00:27:45.766900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.157 qpair failed and we were unable to recover it. 
00:26:27.157 [2024-07-16 00:27:45.767127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.157 [2024-07-16 00:27:45.767166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.157 qpair failed and we were unable to recover it. 00:26:27.157 [2024-07-16 00:27:45.767386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.157 [2024-07-16 00:27:45.767402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.157 qpair failed and we were unable to recover it. 00:26:27.157 [2024-07-16 00:27:45.767681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.157 [2024-07-16 00:27:45.767698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.157 qpair failed and we were unable to recover it. 00:26:27.157 [2024-07-16 00:27:45.767997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.157 [2024-07-16 00:27:45.768029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.157 qpair failed and we were unable to recover it. 00:26:27.157 [2024-07-16 00:27:45.768340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.157 [2024-07-16 00:27:45.768374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.157 qpair failed and we were unable to recover it. 00:26:27.157 [2024-07-16 00:27:45.768559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.157 [2024-07-16 00:27:45.768591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.157 qpair failed and we were unable to recover it. 00:26:27.157 [2024-07-16 00:27:45.768930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.157 [2024-07-16 00:27:45.768961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.157 qpair failed and we were unable to recover it. 00:26:27.157 [2024-07-16 00:27:45.769278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.157 [2024-07-16 00:27:45.769311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.157 qpair failed and we were unable to recover it. 00:26:27.157 [2024-07-16 00:27:45.769616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.157 [2024-07-16 00:27:45.769648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.157 qpair failed and we were unable to recover it. 00:26:27.157 [2024-07-16 00:27:45.769964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.157 [2024-07-16 00:27:45.769996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.157 qpair failed and we were unable to recover it. 
00:26:27.157 [2024-07-16 00:27:45.770252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.157 [2024-07-16 00:27:45.770285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.157 qpair failed and we were unable to recover it.
[... the same three-line triplet -- "connect() failed, errno = 111", "sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." -- repeats verbatim for tqpair=0x1a5ded0, with only the timestamps advancing from 2024-07-16 00:27:45.770 through 00:27:45.833; duplicate occurrences elided ...]
00:26:27.163 [2024-07-16 00:27:45.833386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.163 [2024-07-16 00:27:45.833418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.163 qpair failed and we were unable to recover it.
00:26:27.163 [2024-07-16 00:27:45.833750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.833782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.834109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.834142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.834462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.834480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.834778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.834810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.835155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.835187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.835542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.835560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.835853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.835868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.836092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.836123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.836460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.836493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.836732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.836764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 
00:26:27.163 [2024-07-16 00:27:45.837044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.837077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.837342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.837374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.837732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.837763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.838029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.838068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.838305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.838322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.838520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.838536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.838746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.838763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.839043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.839075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.839427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.839460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.839714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.839745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 
00:26:27.163 [2024-07-16 00:27:45.840061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.840094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.840382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.840415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.840721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.840754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.841097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.841129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.841372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.841405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.841689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.841721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.842041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.842073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.842407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.842441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.842701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.842733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.843081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.843113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 
00:26:27.163 [2024-07-16 00:27:45.843439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.163 [2024-07-16 00:27:45.843471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.163 qpair failed and we were unable to recover it. 00:26:27.163 [2024-07-16 00:27:45.843704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.843736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.843988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.844020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.844379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.844412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.844683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.844715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.844951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.844983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.845246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.845279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.845603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.845635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.845960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.845993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.846320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.846352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 
00:26:27.164 [2024-07-16 00:27:45.846672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.846688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.846842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.846857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.847128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.847161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.847462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.847495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.847818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.847849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.848099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.848131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.848388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.848421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.848656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.848688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.849032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.849064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.849330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.849360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 
00:26:27.164 [2024-07-16 00:27:45.849686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.849718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.850038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.850079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.850372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.850389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.850655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.850687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.851064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.851102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.851347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.851364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.851598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.851614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.851837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.851853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.852079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.852094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.852403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.852417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 
00:26:27.164 [2024-07-16 00:27:45.852650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.852666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.852824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.852839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.853131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.853147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.853442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.853459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.853705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.853722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.853974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.853991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.854204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.854221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.854436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.854452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.854753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.854785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.855129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.855161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 
00:26:27.164 [2024-07-16 00:27:45.855465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.855499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.164 qpair failed and we were unable to recover it. 00:26:27.164 [2024-07-16 00:27:45.855821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.164 [2024-07-16 00:27:45.855853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.856089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.856121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.856393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.856426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.856677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.856709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.856955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.856988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.857345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.857361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.857609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.857641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.857890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.857922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.858248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.858265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 
00:26:27.165 [2024-07-16 00:27:45.858489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.858506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.858745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.858783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.859087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.859119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.859427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.859443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.859690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.859722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.860003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.860036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.860266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.860283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.860527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.860559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.860812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.860843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.861041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.861076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 
00:26:27.165 [2024-07-16 00:27:45.861383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.861416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.861653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.861685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.862034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.862066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.862382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.862399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.862695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.862712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.863013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.863030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.863340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.863357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.863578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.863595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.863822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.863839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.863988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.864003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 
00:26:27.165 [2024-07-16 00:27:45.864217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.864241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.864485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.864518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.864873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.864905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.865262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.865294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.865621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.865654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.865984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.866024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.866326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.866359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.866547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.866580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.866911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.866949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.867257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.867291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 
00:26:27.165 [2024-07-16 00:27:45.867487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.867504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.867805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.165 [2024-07-16 00:27:45.867838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.165 qpair failed and we were unable to recover it. 00:26:27.165 [2024-07-16 00:27:45.868092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.166 [2024-07-16 00:27:45.868109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.166 qpair failed and we were unable to recover it. 00:26:27.166 [2024-07-16 00:27:45.868349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.166 [2024-07-16 00:27:45.868366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.166 qpair failed and we were unable to recover it. 00:26:27.166 [2024-07-16 00:27:45.868564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.166 [2024-07-16 00:27:45.868581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.166 qpair failed and we were unable to recover it. 00:26:27.166 [2024-07-16 00:27:45.868875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.166 [2024-07-16 00:27:45.868907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.166 qpair failed and we were unable to recover it. 00:26:27.166 [2024-07-16 00:27:45.869182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.166 [2024-07-16 00:27:45.869214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.166 qpair failed and we were unable to recover it. 00:26:27.166 [2024-07-16 00:27:45.869536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.166 [2024-07-16 00:27:45.869570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.166 qpair failed and we were unable to recover it. 00:26:27.166 [2024-07-16 00:27:45.869859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.166 [2024-07-16 00:27:45.869877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.166 qpair failed and we were unable to recover it. 00:26:27.166 [2024-07-16 00:27:45.870105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.166 [2024-07-16 00:27:45.870121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.166 qpair failed and we were unable to recover it. 
00:26:27.166 [2024-07-16 00:27:45.870441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.166 [2024-07-16 00:27:45.870474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.166 qpair failed and we were unable to recover it. 00:26:27.166 [2024-07-16 00:27:45.870716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.166 [2024-07-16 00:27:45.870749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.166 qpair failed and we were unable to recover it. 00:26:27.166 [2024-07-16 00:27:45.870999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.166 [2024-07-16 00:27:45.871031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.166 qpair failed and we were unable to recover it. 00:26:27.166 [2024-07-16 00:27:45.871356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.166 [2024-07-16 00:27:45.871389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.166 qpair failed and we were unable to recover it. 00:26:27.166 [2024-07-16 00:27:45.871730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.166 [2024-07-16 00:27:45.871762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.166 qpair failed and we were unable to recover it. 00:26:27.166 [2024-07-16 00:27:45.872101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.166 [2024-07-16 00:27:45.872134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.166 qpair failed and we were unable to recover it. 00:26:27.166 [2024-07-16 00:27:45.872373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.166 [2024-07-16 00:27:45.872406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.166 qpair failed and we were unable to recover it. 00:26:27.166 [2024-07-16 00:27:45.872785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.166 [2024-07-16 00:27:45.872817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.166 qpair failed and we were unable to recover it. 00:26:27.166 [2024-07-16 00:27:45.873067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.166 [2024-07-16 00:27:45.873099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.166 qpair failed and we were unable to recover it. 00:26:27.166 [2024-07-16 00:27:45.873429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.166 [2024-07-16 00:27:45.873462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.166 qpair failed and we were unable to recover it. 
00:26:27.166 [2024-07-16 00:27:45.873716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.166 [2024-07-16 00:27:45.873747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.166 qpair failed and we were unable to recover it.
00:26:27.166 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 00:27:45.874 through 00:27:45.937 ...]
00:26:27.171 [2024-07-16 00:27:45.937940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.171 [2024-07-16 00:27:45.937972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.171 qpair failed and we were unable to recover it. 00:26:27.171 [2024-07-16 00:27:45.938300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.171 [2024-07-16 00:27:45.938333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.171 qpair failed and we were unable to recover it. 00:26:27.171 [2024-07-16 00:27:45.938571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.171 [2024-07-16 00:27:45.938616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.171 qpair failed and we were unable to recover it. 00:26:27.171 [2024-07-16 00:27:45.938768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.171 [2024-07-16 00:27:45.938786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.171 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.939016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.939032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.939334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.939367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.939717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.939748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.940060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.940092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.940334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.940352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.940610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.940651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 
00:26:27.172 [2024-07-16 00:27:45.940911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.940944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.941120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.941152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.941409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.941442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.941778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.941795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.942072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.942124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.942333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.942350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.942679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.942696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.942941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.942973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.943281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.943314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.943566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.943582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 
00:26:27.172 [2024-07-16 00:27:45.943862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.943894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.944164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.944197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.944467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.944500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.944761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.944792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.945146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.945178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.945519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.945551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.945793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.945825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.946130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.946162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.946418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.946452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.946819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.946850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 
00:26:27.172 [2024-07-16 00:27:45.947086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.947118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.947426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.947459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.947699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.947732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.947980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.947997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.948230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.948247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.948531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.948563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.948747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.948779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.949089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.949120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.949436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.949468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.949780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.949812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 
00:26:27.172 [2024-07-16 00:27:45.950015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.950047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.950291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.950324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.950517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.950550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.950893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.172 [2024-07-16 00:27:45.950925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.172 qpair failed and we were unable to recover it. 00:26:27.172 [2024-07-16 00:27:45.951210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.951252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.951471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.951503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.951746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.951777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.952033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.952065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.952332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.952365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.952642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.952659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 
00:26:27.173 [2024-07-16 00:27:45.952879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.952896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.953139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.953155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.953377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.953395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.953710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.953748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.954017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.954050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.954317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.954350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.954595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.954628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.954894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.954926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.955188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.955221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.955555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.955589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 
00:26:27.173 [2024-07-16 00:27:45.955917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.955948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.956283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.956316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.956642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.956675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.956932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.956964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.957273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.957306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.957470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.957486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.957701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.957718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.958028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.958060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.958314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.958348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.958653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.958685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 
00:26:27.173 [2024-07-16 00:27:45.958937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.958969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.959203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.959247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.959486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.959518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.959825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.959863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.960084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.960101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.960314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.960339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.960655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.960688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.960960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.960992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.961301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.961334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.961654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.961688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 
00:26:27.173 [2024-07-16 00:27:45.962009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.962048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.962381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.962416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.962673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.962705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.962988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.963021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.963378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.173 [2024-07-16 00:27:45.963411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.173 qpair failed and we were unable to recover it. 00:26:27.173 [2024-07-16 00:27:45.963742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.963774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.964076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.964109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.964443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.964476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.964803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.964836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.965170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.965201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 
00:26:27.174 [2024-07-16 00:27:45.965555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.965587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.965867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.965899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.966174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.966206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.966493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.966527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.966728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.966760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.967090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.967122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.967454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.967488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.967819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.967851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.968109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.968142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.968498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.968531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 
00:26:27.174 [2024-07-16 00:27:45.968826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.968861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.969132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.969166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.969447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.969482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.969781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.969815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.970090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.970124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.970411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.970444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.970711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.970744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.971011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.971044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.971298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.971333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.971583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.971616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 
00:26:27.174 [2024-07-16 00:27:45.971890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.971923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.972207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.972252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.972505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.972523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.972737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.972755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.972916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.972933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.973161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.973194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.973463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.973497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.973758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.973775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.974114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.974147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.974421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.974457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 
00:26:27.174 [2024-07-16 00:27:45.974644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.974677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.975019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.975053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.975394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.975428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.975712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.975747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.976069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.976103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.174 [2024-07-16 00:27:45.976445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.174 [2024-07-16 00:27:45.976479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.174 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.976787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.976820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.977137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.977169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.977471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.977506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.977708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.977741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 
00:26:27.175 [2024-07-16 00:27:45.978044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.978076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.978407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.978441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.978700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.978733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.979029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.979062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.979422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.979457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.979727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.979761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.979964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.980000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.980263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.980299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.980629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.980664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.981013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.981049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 
00:26:27.175 [2024-07-16 00:27:45.981314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.981361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.981598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.981616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.981759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.981777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.982068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.982086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.982339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.982382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.982730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.982762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.983086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.983122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.983404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.983438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.983702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.983741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.984008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.984041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 
00:26:27.175 [2024-07-16 00:27:45.984379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.984413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.984668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.984702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.985053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.985085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.985357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.985391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.985586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.985619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.985874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.985907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.986213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.986257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.986460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.986492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.986810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.986844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.987110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.987142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 
00:26:27.175 [2024-07-16 00:27:45.987421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.175 [2024-07-16 00:27:45.987456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.175 qpair failed and we were unable to recover it. 00:26:27.175 [2024-07-16 00:27:45.987723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.176 [2024-07-16 00:27:45.987740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.176 qpair failed and we were unable to recover it. 00:26:27.176 [2024-07-16 00:27:45.988045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.176 [2024-07-16 00:27:45.988062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.176 qpair failed and we were unable to recover it. 00:26:27.176 [2024-07-16 00:27:45.988373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.176 [2024-07-16 00:27:45.988390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.176 qpair failed and we were unable to recover it. 00:26:27.176 [2024-07-16 00:27:45.988635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.176 [2024-07-16 00:27:45.988654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.176 qpair failed and we were unable to recover it. 00:26:27.176 [2024-07-16 00:27:45.988949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.176 [2024-07-16 00:27:45.988967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.176 qpair failed and we were unable to recover it. 00:26:27.176 [2024-07-16 00:27:45.989188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.176 [2024-07-16 00:27:45.989204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.176 qpair failed and we were unable to recover it. 00:26:27.176 [2024-07-16 00:27:45.989425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.176 [2024-07-16 00:27:45.989442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.176 qpair failed and we were unable to recover it. 00:26:27.176 [2024-07-16 00:27:45.989667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.176 [2024-07-16 00:27:45.989699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.176 qpair failed and we were unable to recover it. 00:26:27.176 [2024-07-16 00:27:45.989990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.176 [2024-07-16 00:27:45.990022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.176 qpair failed and we were unable to recover it. 
00:26:27.176 [2024-07-16 00:27:45.990365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.176 [2024-07-16 00:27:45.990398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.176 qpair failed and we were unable to recover it. 00:26:27.176 [2024-07-16 00:27:45.990598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.176 [2024-07-16 00:27:45.990630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.176 qpair failed and we were unable to recover it. 00:26:27.176 [2024-07-16 00:27:45.990802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.176 [2024-07-16 00:27:45.990834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.176 qpair failed and we were unable to recover it. 00:26:27.176 [2024-07-16 00:27:45.991145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.176 [2024-07-16 00:27:45.991178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.176 qpair failed and we were unable to recover it. 00:26:27.176 [2024-07-16 00:27:45.991452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.176 [2024-07-16 00:27:45.991486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.176 qpair failed and we were unable to recover it. 00:26:27.176 [2024-07-16 00:27:45.991744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.176 [2024-07-16 00:27:45.991782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.176 qpair failed and we were unable to recover it. 00:26:27.176 [2024-07-16 00:27:45.992029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.176 [2024-07-16 00:27:45.992062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.176 qpair failed and we were unable to recover it. 00:26:27.176 [2024-07-16 00:27:45.992325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.176 [2024-07-16 00:27:45.992359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.176 qpair failed and we were unable to recover it. 00:26:27.176 [2024-07-16 00:27:45.992570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.176 [2024-07-16 00:27:45.992587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.176 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.992866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.992900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 
00:26:27.450 [2024-07-16 00:27:45.993091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.993125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.993354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.993387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.993585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.993618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.993805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.993838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.994015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.994048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.994248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.994288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.994448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.994465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.994702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.994735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.994930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.994962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.995184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.995216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 
00:26:27.450 [2024-07-16 00:27:45.995548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.995581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.995777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.995810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.995996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.996013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.996155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.996173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.996376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.996409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.996655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.996689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.997000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.997032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.997274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.997308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.997585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.997617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.997869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.997901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 
00:26:27.450 [2024-07-16 00:27:45.998080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.998113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.998367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.998399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.998555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.998576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.998876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.998910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.999251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.999284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.999477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.999520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.999746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.999764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:45.999961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:45.999979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:46.000139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:46.000156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:46.000448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:46.000482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 
00:26:27.450 [2024-07-16 00:27:46.000737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:46.000770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:46.001017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:46.001050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:46.001251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:46.001285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:46.001539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:46.001572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:46.001760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:46.001794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:46.001980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:46.002013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:46.002326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:46.002360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.450 [2024-07-16 00:27:46.002690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.450 [2024-07-16 00:27:46.002708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.450 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.002975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.003009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.003273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.003306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 
00:26:27.451 [2024-07-16 00:27:46.003510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.003544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.003730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.003763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.003932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.003949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.004167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.004184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.004325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.004342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.004546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.004583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.004890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.004922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.005201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.005244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.005482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.005515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.005754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.005786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 
00:26:27.451 [2024-07-16 00:27:46.006052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.006085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.006325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.006358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.006517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.006534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.006757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.006788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.007044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.007077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.007330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.007364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.007560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.007593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.007852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.007884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.008197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.008255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.008535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.008568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 
00:26:27.451 [2024-07-16 00:27:46.008734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.008751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.009001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.009033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.009289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.009323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.009634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.009666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.009869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.009901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.010209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.010251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.010433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.010465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.010776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.010809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.011157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.011189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.011395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.011428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 
00:26:27.451 [2024-07-16 00:27:46.011680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.011712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.012044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.012077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.012326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.012360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.012604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.012621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.012821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.012838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.012973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.012989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.013265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.013282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.013525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.013542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.451 [2024-07-16 00:27:46.013708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.451 [2024-07-16 00:27:46.013725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.451 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.013976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.014008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 
00:26:27.452 [2024-07-16 00:27:46.014254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.014286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.014524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.014557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.014723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.014756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.014926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.014943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.015219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.015262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.015441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.015473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.015719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.015752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.016010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.016042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.016348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.016380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.016641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.016674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 
00:26:27.452 [2024-07-16 00:27:46.016909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.016946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.017201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.017247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.017438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.017469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.017665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.017697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.018014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.018046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.018304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.018337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.018668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.018699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.018976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.019008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.019249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.019282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.019452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.019468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 
00:26:27.452 [2024-07-16 00:27:46.019704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.019736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.019972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.020004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.020309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.020343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.020586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.020618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.020923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.020940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.021160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.021176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.021378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.021395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.021625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.021657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.021856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.021888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.022123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.022155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 
00:26:27.452 [2024-07-16 00:27:46.022467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.022500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.022734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.022766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.023091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.023123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.023449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.023482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.023730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.023761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.024083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.024115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.024355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.024388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.024634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.024672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.024922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.024954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 00:26:27.452 [2024-07-16 00:27:46.025149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.452 [2024-07-16 00:27:46.025181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.452 qpair failed and we were unable to recover it. 
00:26:27.453 [2024-07-16 00:27:46.025422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.453 [2024-07-16 00:27:46.025455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.453 qpair failed and we were unable to recover it. 00:26:27.453 [2024-07-16 00:27:46.025754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.453 [2024-07-16 00:27:46.025786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.453 qpair failed and we were unable to recover it. 00:26:27.453 [2024-07-16 00:27:46.026035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.453 [2024-07-16 00:27:46.026066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.453 qpair failed and we were unable to recover it. 00:26:27.453 [2024-07-16 00:27:46.026312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.453 [2024-07-16 00:27:46.026346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.453 qpair failed and we were unable to recover it. 00:26:27.453 [2024-07-16 00:27:46.026649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.453 [2024-07-16 00:27:46.026680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.453 qpair failed and we were unable to recover it. 00:26:27.453 [2024-07-16 00:27:46.026999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.453 [2024-07-16 00:27:46.027030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.453 qpair failed and we were unable to recover it. 00:26:27.453 [2024-07-16 00:27:46.027359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.453 [2024-07-16 00:27:46.027392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.453 qpair failed and we were unable to recover it. 00:26:27.453 [2024-07-16 00:27:46.027689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.453 [2024-07-16 00:27:46.027721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.453 qpair failed and we were unable to recover it. 00:26:27.453 [2024-07-16 00:27:46.028045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.453 [2024-07-16 00:27:46.028076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.453 qpair failed and we were unable to recover it. 00:26:27.453 [2024-07-16 00:27:46.028381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.453 [2024-07-16 00:27:46.028413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.453 qpair failed and we were unable to recover it. 
00:26:27.453 [2024-07-16 00:27:46.028602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.453 [2024-07-16 00:27:46.028635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.453 qpair failed and we were unable to recover it. 00:26:27.453 [2024-07-16 00:27:46.028946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.453 [2024-07-16 00:27:46.028977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.453 qpair failed and we were unable to recover it. 00:26:27.453 [2024-07-16 00:27:46.029243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.453 [2024-07-16 00:27:46.029277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.453 qpair failed and we were unable to recover it. 00:26:27.453 [2024-07-16 00:27:46.029588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.453 [2024-07-16 00:27:46.029620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.453 qpair failed and we were unable to recover it. 00:26:27.453 [2024-07-16 00:27:46.029946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.453 [2024-07-16 00:27:46.029978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.453 qpair failed and we were unable to recover it. 00:26:27.453 [2024-07-16 00:27:46.030320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.453 [2024-07-16 00:27:46.030354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.453 qpair failed and we were unable to recover it. 00:26:27.453 [2024-07-16 00:27:46.030674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.453 [2024-07-16 00:27:46.030717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.453 qpair failed and we were unable to recover it. 00:26:27.453 [2024-07-16 00:27:46.030943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.453 [2024-07-16 00:27:46.030959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.453 qpair failed and we were unable to recover it. 00:26:27.453 [2024-07-16 00:27:46.031164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.453 [2024-07-16 00:27:46.031180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.453 qpair failed and we were unable to recover it. 00:26:27.453 [2024-07-16 00:27:46.031507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.453 [2024-07-16 00:27:46.031524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.453 qpair failed and we were unable to recover it. 
00:26:27.453 [2024-07-16 00:27:46.031814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.031847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.032087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.032119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.032492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.032525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.032839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.032870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.033180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.033213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.033521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.033554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.033749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.033781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.034152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.034184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.034517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.034552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.034884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.034916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.035263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.035295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.035468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.035500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.035676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.035708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.036044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.036076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.036295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.036329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.036580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.036612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.036862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.036895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.037079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.037111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.037493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.037567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.037912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.037931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.038198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.453 [2024-07-16 00:27:46.038215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.453 qpair failed and we were unable to recover it.
00:26:27.453 [2024-07-16 00:27:46.038516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.038533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.038829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.038861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.039202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.039245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.039568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.039602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.039910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.039927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.040220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.040263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.040570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.040601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.040795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.040827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.041122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.041138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.041347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.041365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.041586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.041607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.041841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.041873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.042221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.042261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.042566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.042598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.042845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.042877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.043189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.043221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.043433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.043469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.043716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.043748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.043930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.043962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.044290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.044323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.044627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.044658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.044986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.045018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.045287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.045331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.045663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.045695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.045944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.045976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.046305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.046336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.046571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.046604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.046788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.046806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.047014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.047046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.047318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.047363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.047700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.047734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.048038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.048070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.048416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.048449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.048707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.454 [2024-07-16 00:27:46.048752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.454 qpair failed and we were unable to recover it.
00:26:27.454 [2024-07-16 00:27:46.049048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.049079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.049396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.049429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.049757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.049788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.050113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.050184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.050579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.050651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.050984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.051020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.051349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.051384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.051674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.051706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.051953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.051986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.052250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.052283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.052532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.052563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.052820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.052853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.053150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.053181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.053468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.053503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.053719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.053731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.053948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.053961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.054250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.054291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.054591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.054623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.054937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.054968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.055209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.055253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.055537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.055569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.055868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.055900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.056237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.056270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.056572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.056605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.056926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.056957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.057246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.057280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.057615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.057649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.057973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.058005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.058253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.058285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.058547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.058591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.058731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.058743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.059049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.059080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.059408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.059439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.059769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.059802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.060039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.060071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.060313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.060347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.060672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.060704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.061008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.061040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.061359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.061392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.061716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.061729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.455 [2024-07-16 00:27:46.062042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.455 [2024-07-16 00:27:46.062074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.455 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.062250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.062283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.062511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.062543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.062802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.062834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.063176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.063207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.063480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.063512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.063777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.063809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.064158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.064190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.064485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.064518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.064770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.064802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.065039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.065070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.065267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.065301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.065505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.065537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.065770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.065801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.066044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.066076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.066325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.066358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.066712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.066749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.067064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.067095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.067406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.067439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.067743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.067775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.068097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.068129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.068440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.068473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.068803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.068834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.069066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.069099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.069430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.069461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.069765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.069797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.070121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.070153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.070466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.070499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.070742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.070774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.071034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.071066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.071359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.071392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.071569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.071601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.071836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.071868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.072108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.072120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.072360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.072392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.072722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.072754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.073096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.073128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.073314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.073347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.073650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.073682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.074037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.074068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.074419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.074451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.456 qpair failed and we were unable to recover it.
00:26:27.456 [2024-07-16 00:27:46.074770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.456 [2024-07-16 00:27:46.074782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.075005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.075018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.075181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.075194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.075479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.075507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.075810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.075843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.076169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.076201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.076452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.076485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.076720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.076733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.077021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.077052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.077258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.077291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.077552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.077583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.077822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.077834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.078130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.078162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.078534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.078576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.078769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.078783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.079015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.079030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.079251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.079264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.079470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.079483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.079643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.079675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.079842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.079875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.080066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.080097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.080335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.080367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.080680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.080711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.080972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.081004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.081312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.081345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.081663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.081695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.081957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.081971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.082161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.082173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.082386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.082418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.082658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.082690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.083021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.083034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.083242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.083255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.083543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.083575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.083882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.457 [2024-07-16 00:27:46.083914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.457 qpair failed and we were unable to recover it.
00:26:27.457 [2024-07-16 00:27:46.084245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.457 [2024-07-16 00:27:46.084278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.457 qpair failed and we were unable to recover it. 00:26:27.457 [2024-07-16 00:27:46.084615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.457 [2024-07-16 00:27:46.084647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.457 qpair failed and we were unable to recover it. 00:26:27.457 [2024-07-16 00:27:46.084901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.457 [2024-07-16 00:27:46.084933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.457 qpair failed and we were unable to recover it. 00:26:27.457 [2024-07-16 00:27:46.085306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.457 [2024-07-16 00:27:46.085339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.457 qpair failed and we were unable to recover it. 00:26:27.457 [2024-07-16 00:27:46.085606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.457 [2024-07-16 00:27:46.085639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.457 qpair failed and we were unable to recover it. 00:26:27.457 [2024-07-16 00:27:46.086001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.457 [2024-07-16 00:27:46.086033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.457 qpair failed and we were unable to recover it. 00:26:27.457 [2024-07-16 00:27:46.086302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.457 [2024-07-16 00:27:46.086336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.457 qpair failed and we were unable to recover it. 00:26:27.457 [2024-07-16 00:27:46.086639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.457 [2024-07-16 00:27:46.086671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.457 qpair failed and we were unable to recover it. 00:26:27.457 [2024-07-16 00:27:46.086923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.086960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.087212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.087254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 
00:26:27.458 [2024-07-16 00:27:46.087534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.087571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.087886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.087898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.088109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.088122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.088404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.088436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.088746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.088779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.089100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.089113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.089408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.089442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.089714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.089746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.090024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.090056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.090338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.090371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 
00:26:27.458 [2024-07-16 00:27:46.090709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.090741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.091066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.091078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.091376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.091410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.091761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.091793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.092036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.092069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.092360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.092394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.092698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.092730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.093050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.093081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.093343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.093376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.093728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.093761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 
00:26:27.458 [2024-07-16 00:27:46.093954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.093987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.094316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.094350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.094600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.094632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.094938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.094951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.095280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.095314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.095639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.095671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.095998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.096030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.096360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.096393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.096594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.096626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.096934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.096966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 
00:26:27.458 [2024-07-16 00:27:46.097220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.097267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.097628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.097660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.097991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.098023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.458 [2024-07-16 00:27:46.098326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.458 [2024-07-16 00:27:46.098360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.458 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.098666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.098698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.099017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.099049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.099378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.099411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.099683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.099715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.099968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.100005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.100265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.100298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 
00:26:27.459 [2024-07-16 00:27:46.100537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.100569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.100881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.100913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.101268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.101300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.101574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.101605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.101873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.101904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.102176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.102207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.102547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.102581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.102838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.102870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.103189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.103202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.103487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.103501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 
00:26:27.459 [2024-07-16 00:27:46.103722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.103735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.103893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.103925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.104303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.104337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.104669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.104701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.105036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.105068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.105386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.105418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.105657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.105689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.106022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.106054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.106312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.106325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.106495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.106509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 
00:26:27.459 [2024-07-16 00:27:46.106822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.106854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.107112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.107145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.107395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.107428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.107758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.107790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.108142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.108175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.108504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.108537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.108864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.108876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.109080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.109093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.109379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.109393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.109685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.109718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 
00:26:27.459 [2024-07-16 00:27:46.110027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.110060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.110380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.110394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.110652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.110685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.110940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.110954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.459 [2024-07-16 00:27:46.111217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.459 [2024-07-16 00:27:46.111239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.459 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.111518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.111531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.111744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.111758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.112008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.112021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.112254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.112271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.112516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.112529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 
00:26:27.460 [2024-07-16 00:27:46.112741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.112754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.112902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.112914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.113120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.113152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.113399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.113433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.113735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.113767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.114072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.114105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.114366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.114399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.114657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.114689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.115019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.115051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.115407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.115439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 
00:26:27.460 [2024-07-16 00:27:46.115708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.115740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.115916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.115949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.116207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.116253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.116534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.116566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.116825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.116857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.117210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.117267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.117452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.117484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.117821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.117852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.118112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.118144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.118473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.118507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 
00:26:27.460 [2024-07-16 00:27:46.118832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.118845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.118980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.119010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.119262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.119295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.119643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.119674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.119920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.119933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.120240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.120274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.120514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.120545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.120803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.120843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.121127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.121141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.121434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.121467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 
00:26:27.460 [2024-07-16 00:27:46.121763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.121795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.122080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.122111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.122371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.122404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.122713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.122745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.123063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.460 [2024-07-16 00:27:46.123095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.460 qpair failed and we were unable to recover it. 00:26:27.460 [2024-07-16 00:27:46.123341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.123374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.123686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.123718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.124006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.124051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.124300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.124338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.124672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.124712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 
00:26:27.461 [2024-07-16 00:27:46.124991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.125023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.125335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.125368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.125684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.125715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.126028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.126060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.126261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.126294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.126628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.126660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.126985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.127018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.127315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.127329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.127566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.127598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.127927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.127959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 
00:26:27.461 [2024-07-16 00:27:46.128283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.128296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.128625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.128658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.128980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.129012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.129326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.129359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.129680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.129712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.130021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.130060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.130281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.130295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.130563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.130596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.130912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.130945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.131192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.131234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 
00:26:27.461 [2024-07-16 00:27:46.131507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.131539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.131793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.131806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.132000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.132013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.132214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.132233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.132518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.132551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.132894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.132926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.133242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.133256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.133465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.133497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.133667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.133699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.133902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.133935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 
00:26:27.461 [2024-07-16 00:27:46.134171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.134184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.134405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.134419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.134622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.134636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.134925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.134958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.135209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.135253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.461 [2024-07-16 00:27:46.135436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.461 [2024-07-16 00:27:46.135468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.461 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.135803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.135836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.136091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.136104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.136315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.136333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.136637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.136670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 
00:26:27.462 [2024-07-16 00:27:46.136932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.136971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.137262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.137276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.137570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.137602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.137951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.137983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.138212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.138232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.138447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.138479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.138790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.138822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.139139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.139172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.139466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.139500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.139786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.139817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 
00:26:27.462 [2024-07-16 00:27:46.140057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.140090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.140387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.140420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.140779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.140811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.141139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.141171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.141435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.141469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.141740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.141772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.142031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.142063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.142310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.142343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.142691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.142723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.143028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.143060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 
00:26:27.462 [2024-07-16 00:27:46.143346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.143379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.143631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.143663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.143925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.143957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.144238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.144272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.144536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.144581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.144905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.144937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.145137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.145170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.145509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.145542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.145892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.145924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 00:26:27.462 [2024-07-16 00:27:46.146177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.462 [2024-07-16 00:27:46.146190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.462 qpair failed and we were unable to recover it. 
00:26:27.462 [2024-07-16 00:27:46.146349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.146363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.146568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.146600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.146939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.146972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.147199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.147212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.147545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.147587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.147827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.147859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.148240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.148273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.148598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.148630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.148943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.148980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.149312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.149346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 
00:26:27.463 [2024-07-16 00:27:46.149626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.149659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.149872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.149885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.150158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.150191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.150516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.150549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.150853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.150886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.151129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.151153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.151450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.151464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.151705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.151719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.151984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.152016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.152264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.152297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 
00:26:27.463 [2024-07-16 00:27:46.152606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.152638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.152903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.152935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.153251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.153286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.153595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.153628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.153948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.153980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.154183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.154196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.154482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.154495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.154794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.154807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.155108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.155139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.155404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.155438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 
00:26:27.463 [2024-07-16 00:27:46.155794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.155826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.156125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.156158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.156413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.156446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.156753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.156786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.157089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.157122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.157454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.157486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.157684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.157717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.157940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.157954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.158237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.158250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 00:26:27.463 [2024-07-16 00:27:46.158571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.463 [2024-07-16 00:27:46.158603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.463 qpair failed and we were unable to recover it. 
00:26:27.463 [2024-07-16 00:27:46.158881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.158913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.159091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.159124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.159459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.159493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.159829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.159861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.160100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.160132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.160389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.160402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.160602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.160617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.160824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.160837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.160973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.160988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.161257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.161289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 
00:26:27.464 [2024-07-16 00:27:46.161648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.161681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.161956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.161970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.162335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.162367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.162698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.162730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.163010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.163042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.163314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.163347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.163623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.163656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.163891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.163904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.164204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.164246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.164579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.164612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 
00:26:27.464 [2024-07-16 00:27:46.164795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.164809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.165092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.165124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.165465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.165498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.165809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.165841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.166155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.166187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.166534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.166567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.166878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.166911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.167246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.167280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.167614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.167646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.167822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.167853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 
00:26:27.464 [2024-07-16 00:27:46.168158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.168190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.168534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.168568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.168822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.168855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.169102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.169116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.169318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.169331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.169572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.169603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.169799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.169832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.170168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.170211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.170491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.170504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 00:26:27.464 [2024-07-16 00:27:46.170777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.170810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.464 qpair failed and we were unable to recover it. 
00:26:27.464 [2024-07-16 00:27:46.171131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.464 [2024-07-16 00:27:46.171164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.171473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.171506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.171846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.171878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.172214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.172255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.172537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.172569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.172823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.172856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.173113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.173145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.173439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.173473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.173780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.173817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.174059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.174091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 
00:26:27.465 [2024-07-16 00:27:46.174278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.174310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.174642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.174680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.174908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.174922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.175186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.175200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.175516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.175549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.175859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.175891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.176128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.176160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.176516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.176549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.176857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.176889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.177209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.177254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 
00:26:27.465 [2024-07-16 00:27:46.177591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.177622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.177875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.177907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.178239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.178272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.178532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.178564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.178841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.178874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.179150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.179163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.179482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.179515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.179776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.179809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.180155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.180169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.180408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.180421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 
00:26:27.465 [2024-07-16 00:27:46.180687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.180700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.180899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.180912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.181138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.181151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.181347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.181360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.181661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.181693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.182025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.182057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.182316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.182348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.182612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.182643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.182993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.183025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 00:26:27.465 [2024-07-16 00:27:46.183294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.465 [2024-07-16 00:27:46.183327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:27.465 qpair failed and we were unable to recover it. 
00:26:27.465 [2024-07-16 00:27:46.183586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.465 [2024-07-16 00:27:46.183618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.465 qpair failed and we were unable to recover it.
[... editor's condensation: the same three-line record repeats 34 more times for tqpair=0x7f917c000b90, timestamps 00:27:46.183924 through 00:27:46.194762, every attempt failing with connect() errno = 111 against 10.0.0.2:4420 ...]
00:26:27.466 [2024-07-16 00:27:46.195171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.466 [2024-07-16 00:27:46.195270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.466 qpair failed and we were unable to recover it.
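Editor's note (added for this write-up; not part of the captured console output): errno = 111 is ECONNREFUSED on Linux, meaning each TCP SYN the initiator sends to 10.0.0.2:4420 is answered with a RST because no process is accepting on that address and port; posix_sock_create() therefore never returns a connected socket, and each qpair is torn down, which is what the repeated "qpair failed and we were unable to recover it." lines record. A minimal sketch of verifying the same condition by hand from the initiator host (illustrative commands, not taken from this log):

  # Illustrative only: probe the NVMe/TCP listener the test is dialing.
  # While no target is listening, this fails the same way: "Connection refused".
  nc -zv 10.0.0.2 4420
  # Confirm no NVMe/TCP sessions were established toward the target.
  ss -tn dst 10.0.0.2:4420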
[... editor's condensation: the same three-line record repeats 174 more times for tqpair=0x1a5ded0, timestamps 00:27:46.195594 through 00:27:46.252055, all connect() failed, errno = 111 against addr=10.0.0.2, port=4420, each ending in "qpair failed and we were unable to recover it." ...]
00:26:27.471 [2024-07-16 00:27:46.252418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.252452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.252784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.252822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.253132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.253164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.253467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.253485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.253734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.253751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.253962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.253980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.254194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.254211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.254458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.254475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.254715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.254732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.254943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.254960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 
00:26:27.471 [2024-07-16 00:27:46.255255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.255273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.255520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.255556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.255797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.255830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.256140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.256172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.256367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.256401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.256647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.256680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.256882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.256916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.257261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.257295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.257602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.257636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.257978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.258011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 
00:26:27.471 [2024-07-16 00:27:46.258257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.258291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.258545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.258578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.258770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.258802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.259110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.259143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.259398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.471 [2024-07-16 00:27:46.259431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.471 qpair failed and we were unable to recover it. 00:26:27.471 [2024-07-16 00:27:46.259790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.259822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.260139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.260172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.260459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.260476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.260789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.260821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.261141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.261174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 
00:26:27.472 [2024-07-16 00:27:46.261421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.261439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.261741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.261774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.262094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.262127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.262427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.262444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.262652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.262669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.262831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.262864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.263194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.263239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.263423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.263439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.263770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.263803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.264147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.264179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 
00:26:27.472 [2024-07-16 00:27:46.264470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.264504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.264839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.264872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.265132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.265149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.265397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.265431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.265744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.265776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.266124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.266157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.266479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.266497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.266821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.266838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.267081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.267098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.267320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.267338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 
00:26:27.472 [2024-07-16 00:27:46.267615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.267648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.267848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.267880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.268122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.268154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.268401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.268418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.268725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.268758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.268951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.268985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.269264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.269297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.269629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.269647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.269966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.269984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.270279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.270312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 
00:26:27.472 [2024-07-16 00:27:46.270575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.270608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.270865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.270898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.271259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.271276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.271553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.271570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.271867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.271900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.272171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.472 [2024-07-16 00:27:46.272204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.472 qpair failed and we were unable to recover it. 00:26:27.472 [2024-07-16 00:27:46.272554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.272574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.273154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.273187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.273556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.273591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.273899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.273938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 
00:26:27.473 [2024-07-16 00:27:46.274252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.274287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.274528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.274561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.274818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.274851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.275039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.275072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.275338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.275356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.275583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.275616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.275927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.275959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.276300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.276333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.276564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.276581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.276732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.276749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 
00:26:27.473 [2024-07-16 00:27:46.276988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.277005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.277229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.277247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.277485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.277518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.277860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.277893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.278203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.278247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.278579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.278612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.278891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.278924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.279189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.279221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.279598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.279631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.279924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.279957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 
00:26:27.473 [2024-07-16 00:27:46.280213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.280241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.280460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.280477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.280762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.280794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.281030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.281062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.281334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.281351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.281574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.281608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.281901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.281939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.282182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.282214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.282472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.282505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.282697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.282731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 
00:26:27.473 [2024-07-16 00:27:46.282980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.283013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.283378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.283411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.283747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.283780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.284189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.284222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.284556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.284588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.284891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.284924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.473 qpair failed and we were unable to recover it. 00:26:27.473 [2024-07-16 00:27:46.285251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.473 [2024-07-16 00:27:46.285286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.474 qpair failed and we were unable to recover it. 00:26:27.474 [2024-07-16 00:27:46.285591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.474 [2024-07-16 00:27:46.285624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.474 qpair failed and we were unable to recover it. 00:26:27.474 [2024-07-16 00:27:46.285823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.474 [2024-07-16 00:27:46.285856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.474 qpair failed and we were unable to recover it. 00:26:27.474 [2024-07-16 00:27:46.286048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.474 [2024-07-16 00:27:46.286081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.474 qpair failed and we were unable to recover it. 
00:26:27.474 [2024-07-16 00:27:46.286424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.474 [2024-07-16 00:27:46.286458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.474 qpair failed and we were unable to recover it. 00:26:27.474 [2024-07-16 00:27:46.286716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.474 [2024-07-16 00:27:46.286749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.474 qpair failed and we were unable to recover it. 00:26:27.474 [2024-07-16 00:27:46.287106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.474 [2024-07-16 00:27:46.287138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.474 qpair failed and we were unable to recover it. 00:26:27.474 [2024-07-16 00:27:46.287449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.474 [2024-07-16 00:27:46.287483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.474 qpair failed and we were unable to recover it. 00:26:27.474 [2024-07-16 00:27:46.287786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.474 [2024-07-16 00:27:46.287819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.474 qpair failed and we were unable to recover it. 00:26:27.474 [2024-07-16 00:27:46.288145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.474 [2024-07-16 00:27:46.288177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.474 qpair failed and we were unable to recover it. 00:26:27.747 [2024-07-16 00:27:46.288451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.747 [2024-07-16 00:27:46.288488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.747 qpair failed and we were unable to recover it. 00:26:27.747 [2024-07-16 00:27:46.288812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.747 [2024-07-16 00:27:46.288847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.747 qpair failed and we were unable to recover it. 00:26:27.747 [2024-07-16 00:27:46.289087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.747 [2024-07-16 00:27:46.289119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.747 qpair failed and we were unable to recover it. 00:26:27.747 [2024-07-16 00:27:46.289497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.747 [2024-07-16 00:27:46.289530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.747 qpair failed and we were unable to recover it. 
00:26:27.747 [2024-07-16 00:27:46.289864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.747 [2024-07-16 00:27:46.289897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.747 qpair failed and we were unable to recover it. 00:26:27.747 [2024-07-16 00:27:46.290087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.747 [2024-07-16 00:27:46.290120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.747 qpair failed and we were unable to recover it. 00:26:27.747 [2024-07-16 00:27:46.290456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.747 [2024-07-16 00:27:46.290490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.747 qpair failed and we were unable to recover it. 00:26:27.747 [2024-07-16 00:27:46.290754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.747 [2024-07-16 00:27:46.290792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.747 qpair failed and we were unable to recover it. 00:26:27.747 [2024-07-16 00:27:46.291143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.747 [2024-07-16 00:27:46.291176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.747 qpair failed and we were unable to recover it. 00:26:27.747 [2024-07-16 00:27:46.291478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.747 [2024-07-16 00:27:46.291512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.747 qpair failed and we were unable to recover it. 00:26:27.747 [2024-07-16 00:27:46.291822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.747 [2024-07-16 00:27:46.291854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.747 qpair failed and we were unable to recover it. 00:26:27.747 [2024-07-16 00:27:46.292117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.747 [2024-07-16 00:27:46.292150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.747 qpair failed and we were unable to recover it. 00:26:27.747 [2024-07-16 00:27:46.292403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.747 [2024-07-16 00:27:46.292437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.747 qpair failed and we were unable to recover it. 00:26:27.747 [2024-07-16 00:27:46.292770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.747 [2024-07-16 00:27:46.292803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.747 qpair failed and we were unable to recover it. 
00:26:27.747 [2024-07-16 00:27:46.293077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.747 [2024-07-16 00:27:46.293110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.747 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / qpair failed triplet repeats ~150 more times for tqpair=0x1a5ded0, timestamps 00:27:46.293 through 00:27:46.341 ...]
00:26:27.751 [2024-07-16 00:27:46.341477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.751 [2024-07-16 00:27:46.341522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.751 qpair failed and we were unable to recover it.
[... the same triplet repeats ~50 more times for tqpair=0x7f9174000b90, timestamps 00:27:46.341 through 00:27:46.359 ...]
00:26:27.752 [2024-07-16 00:27:46.359477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.752 [2024-07-16 00:27:46.359513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.752 qpair failed and we were unable to recover it. 00:26:27.752 [2024-07-16 00:27:46.359815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.752 [2024-07-16 00:27:46.359847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.752 qpair failed and we were unable to recover it. 00:26:27.752 [2024-07-16 00:27:46.360105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.752 [2024-07-16 00:27:46.360137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.752 qpair failed and we were unable to recover it. 00:26:27.752 [2024-07-16 00:27:46.360387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.752 [2024-07-16 00:27:46.360422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.752 qpair failed and we were unable to recover it. 00:26:27.752 [2024-07-16 00:27:46.360712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.360744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.361102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.361135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.361375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.361409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.361646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.361679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.361950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.361983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.362320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.362353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 
00:26:27.753 [2024-07-16 00:27:46.362660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.362692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.363044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.363077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.363345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.363382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.363635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.363668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.363908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.363940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.364183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.364216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.364544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.364577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.364898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.364931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.365243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.365277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.365532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.365565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 
00:26:27.753 [2024-07-16 00:27:46.365730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.365764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.366100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.366132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.366455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.366489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.366738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.366769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.367106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.367139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.367517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.367554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.367871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.367909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.368263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.368297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.368576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.368609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.368940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.368973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 
00:26:27.753 [2024-07-16 00:27:46.369238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.369272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.369580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.369612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.369865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.369898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.370185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.370218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.370570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.370603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.370793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.370826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.371089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.371122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.371474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.371512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.371777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.371810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.372096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.372129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 
00:26:27.753 [2024-07-16 00:27:46.372525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.372559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.372895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.372927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.373166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.373199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.373590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.373623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.373942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.753 [2024-07-16 00:27:46.373960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.753 qpair failed and we were unable to recover it. 00:26:27.753 [2024-07-16 00:27:46.374243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.374276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.374584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.374616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.374935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.374952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.375252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.375292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.375630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.375663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 
00:26:27.754 [2024-07-16 00:27:46.375994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.376026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.376358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.376387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.376689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.376722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.376979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.377011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.377328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.377346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.377674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.377706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.378018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.378050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.378369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.378402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.378643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.378676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.379003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.379035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 
00:26:27.754 [2024-07-16 00:27:46.379348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.379385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.379719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.379751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.380081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.380114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.380399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.380433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.380776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.380808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.381112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.381145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.381406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.381444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.381705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.381722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.382018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.382051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.382299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.382332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 
00:26:27.754 [2024-07-16 00:27:46.382674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.382706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.382950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.382982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.383238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.383276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.383557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.383590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.383841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.383873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.384149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.384182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.384357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.384372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.384661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.384678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.384961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.384978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.385295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.385313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 
00:26:27.754 [2024-07-16 00:27:46.385624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.385657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.385907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.385939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.386190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.386222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.386481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.386514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.386893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.386926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.387245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.754 [2024-07-16 00:27:46.387298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.754 qpair failed and we were unable to recover it. 00:26:27.754 [2024-07-16 00:27:46.387579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.387596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.387911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.387928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.388148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.388165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.388390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.388407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 
00:26:27.755 [2024-07-16 00:27:46.388646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.388683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.388995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.389027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.389294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.389338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.389630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.389663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.389989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.390022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.390280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.390312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.390576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.390608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.390895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.390928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.391269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.391305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.391612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.391628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 
00:26:27.755 [2024-07-16 00:27:46.391852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.391869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.392104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.392122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.392409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.392426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.392706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.392723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.392867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.392883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.393107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.393125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.393358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.393378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.393677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.393709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.394028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.394060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.394304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.394337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 
00:26:27.755 [2024-07-16 00:27:46.394697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.394714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.394925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.394942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.395219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.395247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.395478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.395520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.395812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.395844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.396120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.396153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.396432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.396466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.396796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.396828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.397150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.397182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.397519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.397537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 
00:26:27.755 [2024-07-16 00:27:46.397844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.397877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.755 [2024-07-16 00:27:46.398121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.755 [2024-07-16 00:27:46.398153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.755 qpair failed and we were unable to recover it. 00:26:27.756 [2024-07-16 00:27:46.398519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.756 [2024-07-16 00:27:46.398536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.756 qpair failed and we were unable to recover it. 00:26:27.756 [2024-07-16 00:27:46.398751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.756 [2024-07-16 00:27:46.398783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.756 qpair failed and we were unable to recover it. 00:26:27.756 [2024-07-16 00:27:46.399102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.756 [2024-07-16 00:27:46.399134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.756 qpair failed and we were unable to recover it. 00:26:27.756 [2024-07-16 00:27:46.399324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.756 [2024-07-16 00:27:46.399357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.756 qpair failed and we were unable to recover it. 00:26:27.756 [2024-07-16 00:27:46.399613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.756 [2024-07-16 00:27:46.399646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.756 qpair failed and we were unable to recover it. 00:26:27.756 [2024-07-16 00:27:46.399903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.756 [2024-07-16 00:27:46.399920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.756 qpair failed and we were unable to recover it. 00:26:27.756 [2024-07-16 00:27:46.400085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.756 [2024-07-16 00:27:46.400102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.756 qpair failed and we were unable to recover it. 00:26:27.756 [2024-07-16 00:27:46.400317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.756 [2024-07-16 00:27:46.400350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:27.756 qpair failed and we were unable to recover it. 
00:26:27.756 [2024-07-16 00:27:46.400629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.756 [2024-07-16 00:27:46.400661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:27.756 qpair failed and we were unable to recover it.
00:26:27.758 [2024-07-16 00:27:46.425977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.758 [2024-07-16 00:27:46.426054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420
00:26:27.758 qpair failed and we were unable to recover it.
00:26:27.758 [2024-07-16 00:27:46.426341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.758 [2024-07-16 00:27:46.426418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:27.758 qpair failed and we were unable to recover it.
00:26:27.760 [2024-07-16 00:27:46.450451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.760 [2024-07-16 00:27:46.450529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.760 qpair failed and we were unable to recover it.
00:26:27.761 [2024-07-16 00:27:46.465388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.465422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 00:26:27.761 [2024-07-16 00:27:46.465792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.465824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 00:26:27.761 [2024-07-16 00:27:46.466132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.466165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 00:26:27.761 [2024-07-16 00:27:46.466517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.466549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 00:26:27.761 [2024-07-16 00:27:46.466821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.466854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 00:26:27.761 [2024-07-16 00:27:46.467185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.467218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 00:26:27.761 [2024-07-16 00:27:46.467554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.467587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 00:26:27.761 [2024-07-16 00:27:46.467864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.467912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 00:26:27.761 [2024-07-16 00:27:46.468257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.468292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 00:26:27.761 [2024-07-16 00:27:46.468630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.468662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 
00:26:27.761 [2024-07-16 00:27:46.469002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.469034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 00:26:27.761 [2024-07-16 00:27:46.469342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.469376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 00:26:27.761 [2024-07-16 00:27:46.469644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.469676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 00:26:27.761 [2024-07-16 00:27:46.469937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.469970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 00:26:27.761 [2024-07-16 00:27:46.470321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.470354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 00:26:27.761 [2024-07-16 00:27:46.470551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.470567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 00:26:27.761 [2024-07-16 00:27:46.470866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.470898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 00:26:27.761 [2024-07-16 00:27:46.471164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.471196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 00:26:27.761 [2024-07-16 00:27:46.471558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.471591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 00:26:27.761 [2024-07-16 00:27:46.471917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.471949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 
00:26:27.761 [2024-07-16 00:27:46.472249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.472283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 00:26:27.761 [2024-07-16 00:27:46.472542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.761 [2024-07-16 00:27:46.472576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.761 qpair failed and we were unable to recover it. 00:26:27.761 [2024-07-16 00:27:46.472828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.472860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.473126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.473158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.473513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.473547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.473923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.473955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.474181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.474198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.474516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.474549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.474718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.474750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.475057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.475089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 
00:26:27.762 [2024-07-16 00:27:46.475284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.475316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.475652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.475685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.475938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.475970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.476302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.476335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.476638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.476658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.476983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.477016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.477304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.477338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.477550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.477582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.477922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.477939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.478270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.478303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 
00:26:27.762 [2024-07-16 00:27:46.478644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.478676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.479007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.479039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.479373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.479405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.479732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.479749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.480086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.480103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.480431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.480481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.480714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.480731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.481012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.481044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.481387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.481422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.481602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.481635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 
00:26:27.762 [2024-07-16 00:27:46.481901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.481933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.482257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.482290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.482476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.482509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.482839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.482871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.483097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.483114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.483391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.483424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.483699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.483730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.484050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.484083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.484397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.484430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.484755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.484772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 
00:26:27.762 [2024-07-16 00:27:46.485068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.485101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.485436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.485471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.485671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.485704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.762 qpair failed and we were unable to recover it. 00:26:27.762 [2024-07-16 00:27:46.486032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.762 [2024-07-16 00:27:46.486064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.486374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.486408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.486687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.486704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.487025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.487056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.487386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.487418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.487701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.487732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.488075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.488092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 
00:26:27.763 [2024-07-16 00:27:46.488237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.488255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.488401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.488419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.488719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.488751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.489077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.489109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.489404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.489439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.489788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.489820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.490150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.490183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.490526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.490559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.490875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.490908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.491141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.491158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 
00:26:27.763 [2024-07-16 00:27:46.491314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.491331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.491582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.491614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.491884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.491915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.492161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.492193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.492506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.492540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.492886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.492917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.493221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.493267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.493554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.493586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.493914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.493946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.494260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.494293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 
00:26:27.763 [2024-07-16 00:27:46.494560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.494592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.494949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.494982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.495254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.495287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.495612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.495645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.495971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.496003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.496257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.496290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.496619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.496652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.496866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.496898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.497176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.497208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.497458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.497491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 
00:26:27.763 [2024-07-16 00:27:46.497694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.497727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.497986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.498003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.498245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.498265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.498483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.498500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.763 [2024-07-16 00:27:46.498833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.763 [2024-07-16 00:27:46.498869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.763 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.499180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.499214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.499474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.499507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.499762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.499795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.500117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.500134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.500337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.500354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 
00:26:27.764 [2024-07-16 00:27:46.500658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.500692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.500899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.500930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.501185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.501218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.501567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.501600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.501861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.501893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.502132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.502165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.502503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.502546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.502843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.502861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.503110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.503142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.503424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.503457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 
00:26:27.764 [2024-07-16 00:27:46.503703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.503719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.503943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.503960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.504175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.504192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.504349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.504366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.504575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.504607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.504786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.504818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.505076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.505109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.505375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.505410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.505652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.505684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 00:26:27.764 [2024-07-16 00:27:46.505926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.764 [2024-07-16 00:27:46.505976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.764 qpair failed and we were unable to recover it. 
00:26:27.764 [2024-07-16 00:27:46.506234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.764 [2024-07-16 00:27:46.506278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.764 qpair failed and we were unable to recover it.
00:26:27.764 [2024-07-16 00:27:46.506534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.764 [2024-07-16 00:27:46.506565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.764 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair failure repeats unchanged, apart from timestamps, for every reconnect attempt from 00:27:46.506843 through 00:27:46.574483 ...]
00:26:27.769 [2024-07-16 00:27:46.574729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.769 [2024-07-16 00:27:46.574747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:27.769 qpair failed and we were unable to recover it.
00:26:27.769 [2024-07-16 00:27:46.575013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-07-16 00:27:46.575030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.769 qpair failed and we were unable to recover it. 00:26:27.769 [2024-07-16 00:27:46.575324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-07-16 00:27:46.575343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.769 qpair failed and we were unable to recover it. 00:26:27.769 [2024-07-16 00:27:46.575525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.769 [2024-07-16 00:27:46.575541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.769 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.575761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.575777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.576068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.576085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.576360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.576378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.576595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.576617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.576807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.576824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.577048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.577065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.577366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.577384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 
00:26:27.770 [2024-07-16 00:27:46.577610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.577627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.577754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.577771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.577988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.578004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.578244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.578262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.578422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.578440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.578606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.578628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.578863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.578881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.579183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.579200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.579416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.579434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.579589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.579604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 
00:26:27.770 [2024-07-16 00:27:46.579933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.579949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.580220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.580250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.580476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.580491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.580708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.580724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.580961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.580976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.581249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.581266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.581488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.581503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.581669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.581685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.581838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.581853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.582173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.582188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 
00:26:27.770 [2024-07-16 00:27:46.582467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.582483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.582684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.582699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.583035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.583051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.583330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.583346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.583526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.583541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.583863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.583878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.584056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.584072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.584270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.584294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.584493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.584509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 00:26:27.770 [2024-07-16 00:27:46.584757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.770 [2024-07-16 00:27:46.584779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:27.770 qpair failed and we were unable to recover it. 
00:26:28.049 [2024-07-16 00:27:46.586767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.049 [2024-07-16 00:27:46.586800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:28.049 qpair failed and we were unable to recover it.
00:26:28.049 [2024-07-16 00:27:46.587112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.049 [2024-07-16 00:27:46.587175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420
00:26:28.049 qpair failed and we were unable to recover it.
00:26:28.051 [2024-07-16 00:27:46.610291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.051 [2024-07-16 00:27:46.610324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420
00:26:28.051 qpair failed and we were unable to recover it.
00:26:28.051 [2024-07-16 00:27:46.610645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.051 [2024-07-16 00:27:46.610717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.051 qpair failed and we were unable to recover it.
00:26:28.051 [2024-07-16 00:27:46.613166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.051 [2024-07-16 00:27:46.613198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.051 qpair failed and we were unable to recover it.
00:26:28.051 [2024-07-16 00:27:46.613562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.051 [2024-07-16 00:27:46.613634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:28.051 qpair failed and we were unable to recover it.
00:26:28.052 [2024-07-16 00:27:46.621215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.621235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.621440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.621456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.621624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.621640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.621906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.621923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.622129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.622145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.622383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.622400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.622565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.622581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.622737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.622756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.623041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.623057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.623328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.623346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 
00:26:28.052 [2024-07-16 00:27:46.623631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.623647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.623865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.623881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.624057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.624074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.624210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.624233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.624400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.624417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.624687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.624704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.624919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.624936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.625103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.625119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.625412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.625429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.625602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.625618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 
00:26:28.052 [2024-07-16 00:27:46.625972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.625989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.626138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.626155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.626364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.626381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.626615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.626632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.626795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.626812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.626960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.626976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.627198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.627215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.627369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.627388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.627610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.627626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.627843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.627859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 
00:26:28.052 [2024-07-16 00:27:46.628004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.052 [2024-07-16 00:27:46.628020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.052 qpair failed and we were unable to recover it. 00:26:28.052 [2024-07-16 00:27:46.628284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.628302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.628568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.628585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.628799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.628816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.629036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.629053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.629257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.629274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.629485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.629501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.629818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.629834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.630083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.630099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.630373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.630390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 
00:26:28.053 [2024-07-16 00:27:46.630541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.630557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.630843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.630860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.631131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.631147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.631427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.631445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.631778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.631795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.632052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.632069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.632287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.632304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.632509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.632528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.632741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.632757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.633024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.633040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 
00:26:28.053 [2024-07-16 00:27:46.633178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.633195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.633349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.633365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.633576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.633592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.633709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.633725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.633926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.633941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.634159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.634176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.634394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.634412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.634575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.634591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.634791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.634808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.635099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.635115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 
00:26:28.053 [2024-07-16 00:27:46.635269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.635287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.635493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.635509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.635649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.635665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.635846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.635863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.636016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.053 [2024-07-16 00:27:46.636032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.053 qpair failed and we were unable to recover it. 00:26:28.053 [2024-07-16 00:27:46.636296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.636314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.636464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.636481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.636613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.636629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.636762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.636779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.636974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.636990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 
00:26:28.054 [2024-07-16 00:27:46.637203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.637220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.637460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.637477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.637685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.637702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.637830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.637846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.637961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.637977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.638193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.638209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.638451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.638492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.638808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.638841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.639123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.639137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.639433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.639447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 
00:26:28.054 [2024-07-16 00:27:46.639609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.639623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.639748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.639762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.639919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.639931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.640080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.640094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.640236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.640249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.640471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.640483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.640609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.640622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.640854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.640871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.641065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.641079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.641288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.641301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 
00:26:28.054 [2024-07-16 00:27:46.641420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.641433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.641617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.641629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.641838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.641850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.642072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.642084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.642383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.642395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.642628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.642641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.642831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.642844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.642975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.642988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.643181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.643194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.643327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.643341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 
00:26:28.054 [2024-07-16 00:27:46.643475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.643487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.643687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.643700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.643896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.643908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.644047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.644060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.644342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.644355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.054 [2024-07-16 00:27:46.644633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.054 [2024-07-16 00:27:46.644645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.054 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.644850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.644863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.644967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.644979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.645119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.645131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.645330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.645343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 
00:26:28.055 [2024-07-16 00:27:46.645546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.645558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.645707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.645720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.645933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.645945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.646139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.646152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.646342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.646355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.646549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.646561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.646701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.646713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.647012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.647024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.647146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.647159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.647251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.647262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 
00:26:28.055 [2024-07-16 00:27:46.647348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.647359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.647618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.647631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.647800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.647813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.648031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.648044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.648172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.648185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.648319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.648332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.648485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.648497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.648624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.648638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.648898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.648911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.649103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.649116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 
00:26:28.055 [2024-07-16 00:27:46.649392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.649405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.649600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.649612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.649751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.649764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.649977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.649989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.650176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.650188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.650366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.650378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.650514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.650526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.650740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.650753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.650881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.650893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.651149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.651162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 
00:26:28.055 [2024-07-16 00:27:46.651296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.651309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.651408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.651419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.651696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.651708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.651854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.651866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.652124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.652136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.652291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.652304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.055 [2024-07-16 00:27:46.652455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.055 [2024-07-16 00:27:46.652467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.055 qpair failed and we were unable to recover it. 00:26:28.056 [2024-07-16 00:27:46.652599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.056 [2024-07-16 00:27:46.652610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.056 qpair failed and we were unable to recover it. 00:26:28.056 [2024-07-16 00:27:46.652757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.056 [2024-07-16 00:27:46.652770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.056 qpair failed and we were unable to recover it. 00:26:28.056 [2024-07-16 00:27:46.652895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.056 [2024-07-16 00:27:46.652907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.056 qpair failed and we were unable to recover it. 
00:26:28.060 [2024-07-16 00:27:46.690835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.060 [2024-07-16 00:27:46.690846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.060 qpair failed and we were unable to recover it. 00:26:28.060 [2024-07-16 00:27:46.690980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.060 [2024-07-16 00:27:46.690992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.060 qpair failed and we were unable to recover it. 00:26:28.060 [2024-07-16 00:27:46.691196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.060 [2024-07-16 00:27:46.691208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.060 qpair failed and we were unable to recover it. 00:26:28.060 [2024-07-16 00:27:46.691404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.060 [2024-07-16 00:27:46.691418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.060 qpair failed and we were unable to recover it. 00:26:28.060 [2024-07-16 00:27:46.691747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.060 [2024-07-16 00:27:46.691760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.060 qpair failed and we were unable to recover it. 00:26:28.060 [2024-07-16 00:27:46.692013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.060 [2024-07-16 00:27:46.692026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.060 qpair failed and we were unable to recover it. 00:26:28.060 [2024-07-16 00:27:46.692284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.692297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.692494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.692506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.692646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.692658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.692776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.692789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 
00:26:28.061 [2024-07-16 00:27:46.693019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.693031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.693234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.693246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.693392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.693404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.693594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.693607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.693808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.693820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.694020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.694032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.694164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.694176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.694402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.694415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.694512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.694522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.694802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.694814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 
00:26:28.061 [2024-07-16 00:27:46.695017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.695030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.695164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.695177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.695394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.695406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.695549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.695562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.695811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.695824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.696015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.696027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.696237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.696250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.696501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.696515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.696782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.696794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.697016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.697027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 
00:26:28.061 [2024-07-16 00:27:46.697181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.697193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.697399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.697412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.697596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.697608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.697736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.697748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.698000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.698012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.698205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.698217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.698380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.698393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.698527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.698539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.698735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.698747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.698879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.698892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 
00:26:28.061 [2024-07-16 00:27:46.699089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.699102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.699299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.699311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.699585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.699597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.699792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.699804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.700006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.700019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.700208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.700220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.061 [2024-07-16 00:27:46.700414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.061 [2024-07-16 00:27:46.700426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.061 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.700628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.700641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.700750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.700762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.701054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.701066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 
00:26:28.062 [2024-07-16 00:27:46.701296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.701309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.701534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.701546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.701685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.701697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.701928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.701940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.702134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.702146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.702343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.702355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.702552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.702564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.702849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.702861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.703088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.703100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.703310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.703322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 
00:26:28.062 [2024-07-16 00:27:46.703515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.703527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.703717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.703729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.704002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.704015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.704240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.704253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.704481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.704493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.704696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.704708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.704844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.704856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.705001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.705017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.705209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.705221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.705362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.705376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 
00:26:28.062 [2024-07-16 00:27:46.705653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.705665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.705853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.705866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.706062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.706074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.706259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.706271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.706453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.706466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.706673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.706685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.706820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.706832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.707027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.707039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.707162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.707174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.707310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.707323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 
00:26:28.062 [2024-07-16 00:27:46.707515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.707527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.707784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.062 [2024-07-16 00:27:46.707796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.062 qpair failed and we were unable to recover it. 00:26:28.062 [2024-07-16 00:27:46.707996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.708009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.708221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.708239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.708451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.708464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.708593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.708606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.708813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.708826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.708959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.708972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.709156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.709168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.709319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.709331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 
00:26:28.063 [2024-07-16 00:27:46.709523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.709536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.709729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.709741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.709936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.709949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.710146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.710158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.710366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.710386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.710607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.710623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.710815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.710831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.710982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.710998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.711136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.711152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.711434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.711450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 
00:26:28.063 [2024-07-16 00:27:46.711657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.711672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.711819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.711835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.712049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.712064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.712252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.712266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.712388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.712400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.712673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.712685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.712903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.712916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.713054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.713068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.713177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.713188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.713326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.713338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 
00:26:28.063 [2024-07-16 00:27:46.713542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.713554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.713758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.713771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.713967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.713979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.714126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.714138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.714283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.714296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.714496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.714508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.714735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.714748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.715023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.715036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.715283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.715295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.715489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.715501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 
00:26:28.063 [2024-07-16 00:27:46.715701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.715713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.715897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.715909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.063 [2024-07-16 00:27:46.716103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.063 [2024-07-16 00:27:46.716115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.063 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.716239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.716252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.716524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.716536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.716664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.716677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.716952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.716964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.717150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.717162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.717362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.717374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.717649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.717661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 
00:26:28.064 [2024-07-16 00:27:46.717842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.717854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.718003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.718015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.718209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.718222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.718372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.718385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.718656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.718674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.718881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.718896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.719044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.719059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.719321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.719336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.719526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.719541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.719763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.719778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 
00:26:28.064 [2024-07-16 00:27:46.719931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.719947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.720252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.720269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.720501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.720516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.720658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.720673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.720832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.720847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.720971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.720986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.721183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.721199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.721354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.721376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.721513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.721529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 00:26:28.064 [2024-07-16 00:27:46.721674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.064 [2024-07-16 00:27:46.721690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.064 qpair failed and we were unable to recover it. 
00:26:28.068 [2024-07-16 00:27:46.754479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.068 [2024-07-16 00:27:46.754514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:28.068 qpair failed and we were unable to recover it.
[... the same failure record, now for tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420, repeats through 2024-07-16 00:27:46.764396 ...]
00:26:28.069 [2024-07-16 00:27:46.764620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.069 [2024-07-16 00:27:46.764635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.069 qpair failed and we were unable to recover it. 00:26:28.069 [2024-07-16 00:27:46.764778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.069 [2024-07-16 00:27:46.764793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.069 qpair failed and we were unable to recover it. 00:26:28.069 [2024-07-16 00:27:46.764993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.069 [2024-07-16 00:27:46.765010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.069 qpair failed and we were unable to recover it. 00:26:28.069 [2024-07-16 00:27:46.765282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.069 [2024-07-16 00:27:46.765298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.069 qpair failed and we were unable to recover it. 00:26:28.069 [2024-07-16 00:27:46.765552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.069 [2024-07-16 00:27:46.765568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.069 qpair failed and we were unable to recover it. 00:26:28.069 [2024-07-16 00:27:46.765724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.069 [2024-07-16 00:27:46.765739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.069 qpair failed and we were unable to recover it. 00:26:28.069 [2024-07-16 00:27:46.765871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.069 [2024-07-16 00:27:46.765886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.069 qpair failed and we were unable to recover it. 00:26:28.069 [2024-07-16 00:27:46.766157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.069 [2024-07-16 00:27:46.766172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.069 qpair failed and we were unable to recover it. 00:26:28.069 [2024-07-16 00:27:46.766371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.069 [2024-07-16 00:27:46.766387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.069 qpair failed and we were unable to recover it. 00:26:28.069 [2024-07-16 00:27:46.766511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.069 [2024-07-16 00:27:46.766526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.069 qpair failed and we were unable to recover it. 
00:26:28.069 [2024-07-16 00:27:46.766676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.069 [2024-07-16 00:27:46.766690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.069 qpair failed and we were unable to recover it. 00:26:28.069 [2024-07-16 00:27:46.766906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.069 [2024-07-16 00:27:46.766922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.069 qpair failed and we were unable to recover it. 00:26:28.069 [2024-07-16 00:27:46.767202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.069 [2024-07-16 00:27:46.767218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.767426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.767442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.767600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.767615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.767772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.767788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.767990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.768006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.768200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.768216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.768437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.768453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.768607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.768623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 
00:26:28.070 [2024-07-16 00:27:46.768824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.768839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.768967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.768982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.769173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.769188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.769329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.769345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.769559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.769574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.769746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.769764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.769989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.770004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.770146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.770162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.770371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.770386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.770622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.770638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 
00:26:28.070 [2024-07-16 00:27:46.770844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.770860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.771050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.771065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.771209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.771234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.771462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.771478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.771617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.771633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.771895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.771910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.772112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.772128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.772287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.772303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.772514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.772530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.772750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.772773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 
00:26:28.070 [2024-07-16 00:27:46.772972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.772986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.773134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.773146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.773276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.773288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.773430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.773441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.773687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.773698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.773947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.773959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.774208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.774220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.774348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.070 [2024-07-16 00:27:46.774360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.070 qpair failed and we were unable to recover it. 00:26:28.070 [2024-07-16 00:27:46.774605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.774617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.774815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.774827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 
00:26:28.071 [2024-07-16 00:27:46.774952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.774964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.775159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.775171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.775290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.775305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.775484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.775496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.775719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.775731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.775981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.775993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.776136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.776147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.776346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.776359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.776552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.776564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.776706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.776718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 
00:26:28.071 [2024-07-16 00:27:46.776911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.776923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.777069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.777081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.777205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.777217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.777349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.777362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.777504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.777516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.777766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.777778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.778029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.778040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.778189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.778202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.778388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.778399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.778663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.778675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 
00:26:28.071 [2024-07-16 00:27:46.778782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.778793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.779068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.779080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.779219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.779236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.779502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.779513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.779703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.779714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.780018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.780030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.780153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.780164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.780368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.780380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.780572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.780584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.780717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.780734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 
00:26:28.071 [2024-07-16 00:27:46.780927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.780943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.781147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.781162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.781447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.781463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.781769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.781785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.781868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.781881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.782078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.782093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.782323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.782339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.782543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.782559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.071 qpair failed and we were unable to recover it. 00:26:28.071 [2024-07-16 00:27:46.782748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.071 [2024-07-16 00:27:46.782762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.782966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.782981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 
00:26:28.072 [2024-07-16 00:27:46.783287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.783303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.783450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.783465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.783665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.783681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.783893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.783909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.784135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.784151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.784427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.784444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.784704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.784719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.784879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.784895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.785033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.785047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.785325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.785342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 
00:26:28.072 [2024-07-16 00:27:46.785465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.785481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.785739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.785754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.785944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.785962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.786171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.786185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.786388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.786402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.786606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.786637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.786903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.786939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.787114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.787143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.787366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.787397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.787666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.787695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 
00:26:28.072 [2024-07-16 00:27:46.787990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.788021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.788267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.788297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.788525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.788555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.788788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.788818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.789122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.789153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.789521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.789535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.789693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.789706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.790024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.790054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.790250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.790281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.790467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.790497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 
00:26:28.072 [2024-07-16 00:27:46.790790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.790820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.791013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.791043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.791299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.791331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.791578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.791608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.791834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.791848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.791982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.791995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.792280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.792326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.792642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.792672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.792859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.072 [2024-07-16 00:27:46.792872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.072 qpair failed and we were unable to recover it. 00:26:28.072 [2024-07-16 00:27:46.793131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.793144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 
00:26:28.073 [2024-07-16 00:27:46.793293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.793307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.793438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.793452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.793591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.793604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.793842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.793878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.794050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.794080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.794369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.794382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.794544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.794558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.794761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.794774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.794911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.794925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.795113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.795143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 
00:26:28.073 [2024-07-16 00:27:46.795310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.795341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.795641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.795671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.795917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.795948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.796191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.796221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.796476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.796490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.796780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.796810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.797055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.797086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.797327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.797358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.797657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.797671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.797813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.797826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 
00:26:28.073 [2024-07-16 00:27:46.797920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.797933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.798126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.798139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.798283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.798297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.798510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.798541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.798809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.798839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.799105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.799135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.799385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.799416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.799674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.799687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.799952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.799965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.800116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.800129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 
00:26:28.073 [2024-07-16 00:27:46.800346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.800362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.800470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.800483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.800619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.800634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.800838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.800852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.801152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.801182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.801374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.801405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.801583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.801613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.801847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.801860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.801996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.802026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.073 [2024-07-16 00:27:46.802327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.802357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 
00:26:28.073 [2024-07-16 00:27:46.802604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.073 [2024-07-16 00:27:46.802635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.073 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.802790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.802820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.803057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.803087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.803364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.803395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.803625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.803638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.803828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.803841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.804041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.804054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.804155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.804169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.804375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.804414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.804572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.804601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 
00:26:28.074 [2024-07-16 00:27:46.804760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.804789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.805083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.805113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.805342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.805373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.805613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.805627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.805841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.805871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.806030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.806060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.806295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.806326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.806608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.806621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.806840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.806853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.807045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.807058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 
00:26:28.074 [2024-07-16 00:27:46.807327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.807340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.807487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.807500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.807661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.807690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.807835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.807866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.808184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.808214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.808411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.808440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.808618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.808648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.808821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.808834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.809044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.809075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.809260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.809291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 
00:26:28.074 [2024-07-16 00:27:46.809476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.809506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.809558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6c000 (9): Bad file descriptor 00:26:28.074 [2024-07-16 00:27:46.809871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.809898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.810096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.810111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.810254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.810269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.810415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.810429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.810517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.810531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.810818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.810847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.811014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.811044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 00:26:28.074 [2024-07-16 00:27:46.811233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.074 [2024-07-16 00:27:46.811265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.074 qpair failed and we were unable to recover it. 
00:26:28.075 [2024-07-16 00:27:46.811502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.811516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.811780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.811809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.812032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.812062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.812350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.812380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.812556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.812570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.812719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.812749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.812922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.812952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.813183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.813212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.813408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.813438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.813677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.813706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 
00:26:28.075 [2024-07-16 00:27:46.813939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.813952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.814151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.814164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.814376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.814406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.814629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.814642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.814794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.814825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.815005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.815034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.815347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.815377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.815623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.815636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.815823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.815841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.816062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.816075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 
00:26:28.075 [2024-07-16 00:27:46.816215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.816233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.816454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.816484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.816720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.816750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.817053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.817083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.817328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.817358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.817536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.817564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.817795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.817808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.818003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.818016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.818150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.818180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.818356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.818387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 
00:26:28.075 [2024-07-16 00:27:46.818556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.818586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.818744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.818758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.818960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.818991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.819239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.819269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.819499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.819529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.819702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.819732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.819975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.820005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.075 qpair failed and we were unable to recover it. 00:26:28.075 [2024-07-16 00:27:46.820247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.075 [2024-07-16 00:27:46.820277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.820414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.820429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.820573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.820586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 
00:26:28.076 [2024-07-16 00:27:46.820744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.820757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.820947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.820961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.821107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.821120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.821274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.821288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.821418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.821431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.821692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.821705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.821855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.821885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.822114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.822144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.822325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.822356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.822597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.822627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 
00:26:28.076 [2024-07-16 00:27:46.822924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.822953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.823271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.823301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.823592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.823621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.823799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.823841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.824047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.824060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.824269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.824284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.824435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.824466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.824634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.824666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.824840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.824876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.825175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.825204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 
00:26:28.076 [2024-07-16 00:27:46.825435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.825448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.825579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.825592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.825779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.825793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.825921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.825936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.826128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.826159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.826384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.826415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.826579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.826609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.826845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.826858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.826995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.827025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.827140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.827171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 
00:26:28.076 [2024-07-16 00:27:46.827519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.827550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.827699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.827712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.827871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.827884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.828019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.828032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.828158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.828188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.828370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.828401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.828553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.828582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.828733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.828764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.076 [2024-07-16 00:27:46.828964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.076 [2024-07-16 00:27:46.828994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.076 qpair failed and we were unable to recover it. 00:26:28.077 [2024-07-16 00:27:46.829165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.829195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 
00:26:28.077 [2024-07-16 00:27:46.829497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.829528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 00:26:28.077 [2024-07-16 00:27:46.829769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.829800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 00:26:28.077 [2024-07-16 00:27:46.829956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.829970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 00:26:28.077 [2024-07-16 00:27:46.830167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.830181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 00:26:28.077 [2024-07-16 00:27:46.830319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.830333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 00:26:28.077 [2024-07-16 00:27:46.830527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.830541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 00:26:28.077 [2024-07-16 00:27:46.830690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.830703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 00:26:28.077 [2024-07-16 00:27:46.830905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.830918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 00:26:28.077 [2024-07-16 00:27:46.831048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.831061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 00:26:28.077 [2024-07-16 00:27:46.831276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.831290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 
00:26:28.077 [2024-07-16 00:27:46.831491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.831505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 00:26:28.077 [2024-07-16 00:27:46.831718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.831731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 00:26:28.077 [2024-07-16 00:27:46.831923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.831937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 00:26:28.077 [2024-07-16 00:27:46.832203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.832216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 00:26:28.077 [2024-07-16 00:27:46.832428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.832441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 00:26:28.077 [2024-07-16 00:27:46.832633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.832646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 00:26:28.077 [2024-07-16 00:27:46.832770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.832783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 00:26:28.077 [2024-07-16 00:27:46.832942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.832956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 00:26:28.077 [2024-07-16 00:27:46.833078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.833094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 00:26:28.077 [2024-07-16 00:27:46.833383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.077 [2024-07-16 00:27:46.833397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.077 qpair failed and we were unable to recover it. 
00:26:28.078 [2024-07-16 00:27:46.837727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.078 [2024-07-16 00:27:46.837741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420
00:26:28.078 qpair failed and we were unable to recover it.
00:26:28.078 [2024-07-16 00:27:46.837866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.078 [2024-07-16 00:27:46.837880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420
00:26:28.078 qpair failed and we were unable to recover it.
00:26:28.078 [2024-07-16 00:27:46.838097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.078 [2024-07-16 00:27:46.838111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420
00:26:28.078 qpair failed and we were unable to recover it.
00:26:28.078 [2024-07-16 00:27:46.838253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.078 [2024-07-16 00:27:46.838266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420
00:26:28.078 qpair failed and we were unable to recover it.
00:26:28.078 [2024-07-16 00:27:46.838457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.078 [2024-07-16 00:27:46.838470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420
00:26:28.078 qpair failed and we were unable to recover it.
00:26:28.078 [2024-07-16 00:27:46.838662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.078 [2024-07-16 00:27:46.838695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:28.078 qpair failed and we were unable to recover it.
00:26:28.078 [2024-07-16 00:27:46.838997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.078 [2024-07-16 00:27:46.839021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.078 qpair failed and we were unable to recover it.
00:26:28.078 [2024-07-16 00:27:46.839140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.078 [2024-07-16 00:27:46.839151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.078 qpair failed and we were unable to recover it.
00:26:28.078 [2024-07-16 00:27:46.839287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.078 [2024-07-16 00:27:46.839297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.078 qpair failed and we were unable to recover it.
00:26:28.078 [2024-07-16 00:27:46.839499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.078 [2024-07-16 00:27:46.839509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.078 qpair failed and we were unable to recover it.
00:26:28.082 [2024-07-16 00:27:46.871636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.082 [2024-07-16 00:27:46.871645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.082 qpair failed and we were unable to recover it. 00:26:28.082 [2024-07-16 00:27:46.871841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.082 [2024-07-16 00:27:46.871851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.082 qpair failed and we were unable to recover it. 00:26:28.082 [2024-07-16 00:27:46.872006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.082 [2024-07-16 00:27:46.872015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.082 qpair failed and we were unable to recover it. 00:26:28.082 [2024-07-16 00:27:46.872209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.082 [2024-07-16 00:27:46.872219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.082 qpair failed and we were unable to recover it. 00:26:28.082 [2024-07-16 00:27:46.872358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.082 [2024-07-16 00:27:46.872368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.082 qpair failed and we were unable to recover it. 00:26:28.082 [2024-07-16 00:27:46.872584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.082 [2024-07-16 00:27:46.872594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.082 qpair failed and we were unable to recover it. 00:26:28.082 [2024-07-16 00:27:46.872789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.082 [2024-07-16 00:27:46.872799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.082 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.873070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.873079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.873269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.873279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.873411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.873421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 
00:26:28.083 [2024-07-16 00:27:46.873552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.873561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.873759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.873769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.873925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.873935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.874063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.874073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.874278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.874288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.874487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.874497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.874729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.874739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.874890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.874900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.875029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.875039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.875243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.875254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 
00:26:28.083 [2024-07-16 00:27:46.875445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.875456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.875643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.875653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.875773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.875784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.875971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.875981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.876165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.876175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.876433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.876448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.876641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.876651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.876842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.876853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.877040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.877051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.877303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.877313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 
00:26:28.083 [2024-07-16 00:27:46.877475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.877485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.877708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.877718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.877936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.877945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.878195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.878205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.878421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.878432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.878566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.878576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.878769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.083 [2024-07-16 00:27:46.878779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.083 qpair failed and we were unable to recover it. 00:26:28.083 [2024-07-16 00:27:46.878909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.084 [2024-07-16 00:27:46.878919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.084 qpair failed and we were unable to recover it. 00:26:28.084 [2024-07-16 00:27:46.879051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.084 [2024-07-16 00:27:46.879061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.084 qpair failed and we were unable to recover it. 00:26:28.084 [2024-07-16 00:27:46.879237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.084 [2024-07-16 00:27:46.879247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.084 qpair failed and we were unable to recover it. 
00:26:28.084 [2024-07-16 00:27:46.879393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.084 [2024-07-16 00:27:46.879403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.084 qpair failed and we were unable to recover it. 00:26:28.084 [2024-07-16 00:27:46.879599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.084 [2024-07-16 00:27:46.879608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.084 qpair failed and we were unable to recover it. 00:26:28.084 [2024-07-16 00:27:46.879800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.084 [2024-07-16 00:27:46.879809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.084 qpair failed and we were unable to recover it. 00:26:28.084 [2024-07-16 00:27:46.880024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.084 [2024-07-16 00:27:46.880033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.084 qpair failed and we were unable to recover it. 00:26:28.084 [2024-07-16 00:27:46.880179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.084 [2024-07-16 00:27:46.880189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.084 qpair failed and we were unable to recover it. 00:26:28.084 [2024-07-16 00:27:46.880314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.084 [2024-07-16 00:27:46.880324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.084 qpair failed and we were unable to recover it. 00:26:28.084 [2024-07-16 00:27:46.880461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.084 [2024-07-16 00:27:46.880471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.084 qpair failed and we were unable to recover it. 00:26:28.084 [2024-07-16 00:27:46.880683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.084 [2024-07-16 00:27:46.880693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.084 qpair failed and we were unable to recover it. 00:26:28.084 [2024-07-16 00:27:46.880827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.084 [2024-07-16 00:27:46.880837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.084 qpair failed and we were unable to recover it. 00:26:28.084 [2024-07-16 00:27:46.881029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.084 [2024-07-16 00:27:46.881039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.084 qpair failed and we were unable to recover it. 
00:26:28.084 [2024-07-16 00:27:46.881126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.084 [2024-07-16 00:27:46.881136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.084 qpair failed and we were unable to recover it. 00:26:28.084 [2024-07-16 00:27:46.881387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.084 [2024-07-16 00:27:46.881397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.084 qpair failed and we were unable to recover it. 00:26:28.084 [2024-07-16 00:27:46.881602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.084 [2024-07-16 00:27:46.881612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.084 qpair failed and we were unable to recover it. 00:26:28.084 [2024-07-16 00:27:46.881796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.084 [2024-07-16 00:27:46.881806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.084 qpair failed and we were unable to recover it. 00:26:28.084 [2024-07-16 00:27:46.881997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.084 [2024-07-16 00:27:46.882007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.084 qpair failed and we were unable to recover it. 00:26:28.084 [2024-07-16 00:27:46.882152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.084 [2024-07-16 00:27:46.882162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.084 qpair failed and we were unable to recover it. 00:26:28.363 [2024-07-16 00:27:46.882350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.882361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.882595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.882606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.882753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.882763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.882977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.882987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 
00:26:28.364 [2024-07-16 00:27:46.883185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.883196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.883385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.883396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.883526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.883536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.883735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.883745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.883930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.883940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.884147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.884159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.884357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.884368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.884516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.884527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.884716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.884726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.884964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.884974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 
00:26:28.364 [2024-07-16 00:27:46.885235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.885245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.885446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.885456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.885599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.885609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.885807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.885818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.885952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.885961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.886144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.886154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.886336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.886347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.886536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.886546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.886707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.886717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.886904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.886914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 
00:26:28.364 [2024-07-16 00:27:46.887095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.887105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.887241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.887250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.887431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.887441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.887585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.887596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.887781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.887790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.887974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.887984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.888182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.888192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.888378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.888388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.888581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.888592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.888724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.888734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 
00:26:28.364 [2024-07-16 00:27:46.888926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.888936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.889076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.889086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.889271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.889281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.889556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.889566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.889786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.889797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.889975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.889985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.364 [2024-07-16 00:27:46.890236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.364 [2024-07-16 00:27:46.890246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.364 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.890469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.890479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.890690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.890700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.890843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.890853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 
00:26:28.365 [2024-07-16 00:27:46.891135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.891146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.891330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.891341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.891539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.891549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.891680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.891691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.891892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.891902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.892104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.892116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.892418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.892429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.892558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.892569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.892750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.892761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.892893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.892903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 
00:26:28.365 [2024-07-16 00:27:46.893041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.893051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.893235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.893246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.893430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.893441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.893668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.893678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.893888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.893897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.894090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.894099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.894291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.894301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.894526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.894536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.894656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.894667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.894966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.894976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 
00:26:28.365 [2024-07-16 00:27:46.895165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.895175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.895364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.895375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.895515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.895525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.895668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.895678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.895876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.895886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.896142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.896152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.896293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.896304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.896504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.896516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.896648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.896658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 00:26:28.365 [2024-07-16 00:27:46.896780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.365 [2024-07-16 00:27:46.896791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.365 qpair failed and we were unable to recover it. 
00:26:28.365 [2024-07-16 00:27:46.896979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.365 [2024-07-16 00:27:46.896989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.365 qpair failed and we were unable to recover it.
00:26:28.371 [... the same three-line failure repeats back-to-back from 00:27:46.897217 through 00:27:46.939618 (roughly 200 occurrences): every connect() attempt fails with errno = 111, and every qpair on tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 fails without recovering ...]
00:26:28.371 [2024-07-16 00:27:46.939812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.939821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.939959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.939969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.940088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.940118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.940287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.940317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.940492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.940522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.940749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.940761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.940960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.940989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.941133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.941164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.941408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.941438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.941659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.941668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 
00:26:28.371 [2024-07-16 00:27:46.941884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.941914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.942141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.942170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.942417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.942447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.942757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.942787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.943079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.943109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.943351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.943381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.943549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.943579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.943807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.943817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.944016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.944046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.944297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.944329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 
00:26:28.371 [2024-07-16 00:27:46.944558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.944588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.944767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.944776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.944903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.944913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.945126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.945156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.945449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.945480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.945701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.945711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.945797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.945806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.945987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.945997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.946248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.946264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.946480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.946489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 
00:26:28.371 [2024-07-16 00:27:46.946681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.946712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.946967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.946997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.947264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.947295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.947469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.947498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.371 qpair failed and we were unable to recover it. 00:26:28.371 [2024-07-16 00:27:46.947667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.371 [2024-07-16 00:27:46.947676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.947858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.947867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.948065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.948095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.948391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.948422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.948590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.948618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.948751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.948761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 
00:26:28.372 [2024-07-16 00:27:46.948929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.948959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.949117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.949147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.949306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.949335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.949563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.949594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.949825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.949855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.950102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.950114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.950309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.950319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.950452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.950482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.950710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.950740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.950995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.951026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 
00:26:28.372 [2024-07-16 00:27:46.951252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.951282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.951459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.951489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.951658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.951687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.951860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.951889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.952063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.952092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.952393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.952404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.952527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.952537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.952656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.952665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.952874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.952883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.953033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.953043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 
00:26:28.372 [2024-07-16 00:27:46.953239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.953270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.953524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.953554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.953847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.953876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.954100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.954130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.954294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.954325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.954509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.954539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.954810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.954840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.955024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.955057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.955171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.955181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 00:26:28.372 [2024-07-16 00:27:46.955303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.372 [2024-07-16 00:27:46.955313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.372 qpair failed and we were unable to recover it. 
00:26:28.372 [2024-07-16 00:27:46.955505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.955515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.955650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.955659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.955867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.955896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.956190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.956220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.956477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.956506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.956671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.956700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.957023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.957046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.957298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.957308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.957503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.957512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.957702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.957712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 
00:26:28.373 [2024-07-16 00:27:46.957841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.957850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.957997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.958006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.958261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.958271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.958368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.958378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.958505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.958515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.958726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.958737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.958936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.958946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.959083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.959092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.959215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.959230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.959426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.959455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 
00:26:28.373 [2024-07-16 00:27:46.959770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.959801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.960040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.960071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.960206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.960215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.960452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.960483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.960665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.960694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.960873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.960902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.961199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.961209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.961335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.961345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.961607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.961636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.961872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.961902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 
00:26:28.373 [2024-07-16 00:27:46.962136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.962146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.962347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.962379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.962631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.962660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.962889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.962899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.963090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.963099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.963355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.963385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.963614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.963644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.963817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.963826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.964081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.964110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.373 [2024-07-16 00:27:46.964389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.964420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 
00:26:28.373 [2024-07-16 00:27:46.964637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.373 [2024-07-16 00:27:46.964647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.373 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.964747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.964757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.964893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.964903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.965111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.965140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.965406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.965436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.965732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.965762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.965918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.965928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.966202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.966328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.966469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.966500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.966719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.966729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 
00:26:28.374 [2024-07-16 00:27:46.966854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.966863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.967010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.967020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.967238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.967269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.967438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.967468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.967630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.967660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.967838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.967872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.968097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.968126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.968311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.968343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.968502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.968532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.968713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.968743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 
00:26:28.374 [2024-07-16 00:27:46.968993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.969003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.969139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.969149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.969264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.969275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.969405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.969414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.969614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.969624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.969801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.969811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.969958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.969987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.970215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.970255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.970485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.970514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 00:26:28.374 [2024-07-16 00:27:46.970747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.374 [2024-07-16 00:27:46.970777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.374 qpair failed and we were unable to recover it. 
00:26:28.379 [2024-07-16 00:27:47.016764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.379 [2024-07-16 00:27:47.016793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.379 qpair failed and we were unable to recover it. 00:26:28.379 [2024-07-16 00:27:47.017041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.379 [2024-07-16 00:27:47.017051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.379 qpair failed and we were unable to recover it. 00:26:28.379 [2024-07-16 00:27:47.017169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.379 [2024-07-16 00:27:47.017205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.379 qpair failed and we were unable to recover it. 00:26:28.379 [2024-07-16 00:27:47.017377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.379 [2024-07-16 00:27:47.017408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.379 qpair failed and we were unable to recover it. 00:26:28.379 [2024-07-16 00:27:47.017636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.379 [2024-07-16 00:27:47.017665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.379 qpair failed and we were unable to recover it. 00:26:28.379 [2024-07-16 00:27:47.017852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.379 [2024-07-16 00:27:47.017861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.379 qpair failed and we were unable to recover it. 00:26:28.379 [2024-07-16 00:27:47.017987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.379 [2024-07-16 00:27:47.017997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.379 qpair failed and we were unable to recover it. 00:26:28.379 [2024-07-16 00:27:47.018191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.379 [2024-07-16 00:27:47.018200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.379 qpair failed and we were unable to recover it. 00:26:28.379 [2024-07-16 00:27:47.018335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.379 [2024-07-16 00:27:47.018346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.379 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.018552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.018562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 
00:26:28.380 [2024-07-16 00:27:47.018682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.018692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.018834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.018843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.019082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.019111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.019354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.019385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.019634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.019662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.019898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.019928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.020104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.020134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.020349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.020358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.020613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.020642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.020829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.020859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 
00:26:28.380 [2024-07-16 00:27:47.021017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.021047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.021266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.021276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.021432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.021467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.021624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.021653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.021942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.021971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.022229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.022251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.022525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.022534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.022721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.022731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.022916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.022926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.023175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.023184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 
00:26:28.380 [2024-07-16 00:27:47.023382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.023391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.023614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.023624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.023754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.023763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.024045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.024074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.024331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.024362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.024526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.024556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.024849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.024878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.025210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.025249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.025566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.025596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.025839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.025869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 
00:26:28.380 [2024-07-16 00:27:47.026190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.026231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.026376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.026385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.026689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.026718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.026963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.026993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.027234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.027244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.027520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.027550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.380 [2024-07-16 00:27:47.027747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.380 [2024-07-16 00:27:47.027777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.380 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.028037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.028067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.028239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.028269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.028571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.028602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 
00:26:28.381 [2024-07-16 00:27:47.028847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.028876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.029100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.029143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.029279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.029289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.029408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.029418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.029702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.029731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.029922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.029951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.030100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.030109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.030365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.030395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.030566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.030596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.030838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.030868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 
00:26:28.381 [2024-07-16 00:27:47.031017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.031027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.031302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.031333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.031497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.031536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.031855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.031891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.032227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.032237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.032370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.032379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.032657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.032686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.032935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.032966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.033193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.033203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.033346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.033376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 
00:26:28.381 [2024-07-16 00:27:47.033613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.033643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.033822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.033852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.034087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.034096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.034350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.034360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.034438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.034447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.034566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.034575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.034766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.034775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.381 [2024-07-16 00:27:47.034971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.381 [2024-07-16 00:27:47.035000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.381 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.035244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.035274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.035540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.035570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 
00:26:28.382 [2024-07-16 00:27:47.035802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.035831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.036131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.036161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.036360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.036370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.036503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.036533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.036841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.036870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.037193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.037203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.037428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.037437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.037628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.037638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.037727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.037736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.037948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.037977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 
00:26:28.382 [2024-07-16 00:27:47.038234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.038264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.038607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.038637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.038783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.038812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.039036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.039065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.039244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.039253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.039430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.039439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.039618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.039628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.039812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.039821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.039946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.039975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.040146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.040176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 
00:26:28.382 [2024-07-16 00:27:47.040427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.040458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.040699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.040728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.040906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.040917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.041128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.041158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.041470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.041500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.041726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.041755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.041924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.041954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.042197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.042251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.042481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.042491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.042772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.042802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 
00:26:28.382 [2024-07-16 00:27:47.042961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.042991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.043245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.043276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.043511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.043541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.043856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.382 [2024-07-16 00:27:47.043886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.382 qpair failed and we were unable to recover it. 00:26:28.382 [2024-07-16 00:27:47.044057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.044086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.044404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.044435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.044682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.044712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.044883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.044892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.045154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.045184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.045440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.045469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 
00:26:28.383 [2024-07-16 00:27:47.045712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.045742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.045983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.046014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.046172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.046181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.046297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.046307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.046503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.046513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.046732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.046741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.046868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.046878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.047073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.047082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.047337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.047367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.047599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.047668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 
00:26:28.383 [2024-07-16 00:27:47.047858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.047893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.048243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.048277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.048595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.048625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.048817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.048848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.049172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.049202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.049452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.049483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.049785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.049816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.050059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.050088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.050344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.050376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 00:26:28.383 [2024-07-16 00:27:47.050566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.383 [2024-07-16 00:27:47.050597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.383 qpair failed and we were unable to recover it. 
00:26:28.383 [2024-07-16 00:27:47.050822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.383 [2024-07-16 00:27:47.050852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:28.383 qpair failed and we were unable to recover it.
00:26:28.383 [2024-07-16 00:27:47.051034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.383 [2024-07-16 00:27:47.051064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:28.383 qpair failed and we were unable to recover it.
[... the same three-line failure pattern repeats for every reconnect attempt between 00:27:47.051323 and 00:27:47.101491, with tqpair alternating between 0x1a5ded0 and 0x7f917c000b90; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:26:28.388 [2024-07-16 00:27:47.101525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.388 [2024-07-16 00:27:47.101542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.388 qpair failed and we were unable to recover it.
00:26:28.388 [2024-07-16 00:27:47.101663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.388 [2024-07-16 00:27:47.101674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.388 qpair failed and we were unable to recover it. 00:26:28.388 [2024-07-16 00:27:47.101830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.388 [2024-07-16 00:27:47.101859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.388 qpair failed and we were unable to recover it. 00:26:28.388 [2024-07-16 00:27:47.102182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.102212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.102462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.102472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.102601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.102612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.102744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.102754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.102960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.102970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.103132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.103142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.103338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.103386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.103564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.103594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 
00:26:28.389 [2024-07-16 00:27:47.103886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.103916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.104139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.104150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.104403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.104428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.104628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.104638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.104844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.104866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.105075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.105097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.105239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.105249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.105509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.105538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.105830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.105860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.106030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.106060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 
00:26:28.389 [2024-07-16 00:27:47.106375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.106411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.106589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.106618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.106917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.106947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.107207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.107217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.107470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.107480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.107598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.107608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.107725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.107735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.107950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.107960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.108105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.108114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.108275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.108307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 
00:26:28.389 [2024-07-16 00:27:47.108538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.108567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.108798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.108828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.109146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.109177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.109396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.109406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.109662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.109691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.109944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.109973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.110277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.110303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.110420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.110430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.110505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.110515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.110651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.110661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 
00:26:28.389 [2024-07-16 00:27:47.110810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.110820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.110967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.110977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.111174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.389 [2024-07-16 00:27:47.111183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.389 qpair failed and we were unable to recover it. 00:26:28.389 [2024-07-16 00:27:47.111389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.111419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.111591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.111620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.111855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.111884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.112128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.112137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.112279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.112310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.112556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.112585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.112812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.112842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 
00:26:28.390 [2024-07-16 00:27:47.113062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.113092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.113254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.113284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.113450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.113460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.113643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.113653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.113835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.113844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.114046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.114056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.114242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.114252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.114439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.114468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.114647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.114676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.114924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.114954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 
00:26:28.390 [2024-07-16 00:27:47.115269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.115305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.115531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.115541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.115800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.115830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.116071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.116101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.116316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.116325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.116551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.116561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.116761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.116771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.117035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.117045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.117192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.117202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.117390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.117401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 
00:26:28.390 [2024-07-16 00:27:47.117592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.117601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.117732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.117742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.117941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.117951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.118139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.118149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.118427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.118458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.118636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.118665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.118900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.118929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.119187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.390 [2024-07-16 00:27:47.119197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.390 qpair failed and we were unable to recover it. 00:26:28.390 [2024-07-16 00:27:47.119334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.119364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.119608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.119638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 
00:26:28.391 [2024-07-16 00:27:47.119906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.119936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.120244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.120276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.120566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.120576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.120779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.120819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.121086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.121116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.121362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.121393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.121642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.121652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.121778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.121788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.122040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.122049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.122247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.122257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 
00:26:28.391 [2024-07-16 00:27:47.122379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.122388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.122527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.122537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.122732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.122742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.122939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.122949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.123110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.123120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.123361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.123391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.123630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.123659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.123909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.123938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.124246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.124276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.124586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.124595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 
00:26:28.391 [2024-07-16 00:27:47.124775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.124787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.124971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.124981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.125106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.125116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.125316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.125326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.125459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.125470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.125601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.125610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.125796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.125805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.125945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.125955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.126084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.126094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.126325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.126356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 
00:26:28.391 [2024-07-16 00:27:47.126595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.126624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.126881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.126911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.127078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.127107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.127283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.127314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.127458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.127487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.127630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.127639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.127784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.127821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.128076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.128105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.128421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.391 [2024-07-16 00:27:47.128451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.391 qpair failed and we were unable to recover it. 00:26:28.391 [2024-07-16 00:27:47.128647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.392 [2024-07-16 00:27:47.128677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.392 qpair failed and we were unable to recover it. 
00:26:28.392 [2024-07-16 00:27:47.128998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.392 [2024-07-16 00:27:47.129027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.392 qpair failed and we were unable to recover it. 00:26:28.392 [2024-07-16 00:27:47.129319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.392 [2024-07-16 00:27:47.129329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.392 qpair failed and we were unable to recover it. 00:26:28.392 [2024-07-16 00:27:47.129473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.392 [2024-07-16 00:27:47.129483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.392 qpair failed and we were unable to recover it. 00:26:28.392 [2024-07-16 00:27:47.129670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.392 [2024-07-16 00:27:47.129700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.392 qpair failed and we were unable to recover it. 00:26:28.392 [2024-07-16 00:27:47.130038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.392 [2024-07-16 00:27:47.130067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.392 qpair failed and we were unable to recover it. 00:26:28.392 [2024-07-16 00:27:47.130246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.392 [2024-07-16 00:27:47.130257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.392 qpair failed and we were unable to recover it. 00:26:28.392 [2024-07-16 00:27:47.130393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.392 [2024-07-16 00:27:47.130403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.392 qpair failed and we were unable to recover it. 00:26:28.392 [2024-07-16 00:27:47.130597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.392 [2024-07-16 00:27:47.130606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.392 qpair failed and we were unable to recover it. 00:26:28.392 [2024-07-16 00:27:47.130740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.392 [2024-07-16 00:27:47.130749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.392 qpair failed and we were unable to recover it. 00:26:28.392 [2024-07-16 00:27:47.130969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.392 [2024-07-16 00:27:47.130978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.392 qpair failed and we were unable to recover it. 
00:26:28.392 [2024-07-16 00:27:47.131115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.392 [2024-07-16 00:27:47.131125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.392 qpair failed and we were unable to recover it.
00:26:28.392 [... the same three-line connect()/qpair failure repeats ~168 times for tqpair=0x7f917c000b90, timestamps 00:27:47.131 through 00:27:47.169, every attempt against addr=10.0.0.2, port=4420 ...]
00:26:28.396 [2024-07-16 00:27:47.169513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.396 [2024-07-16 00:27:47.169547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:28.396 qpair failed and we were unable to recover it.
00:26:28.397 [... the same failure repeats ~42 times for tqpair=0x1a5ded0, timestamps 00:27:47.169 through 00:27:47.183 ...]
00:26:28.397 [2024-07-16 00:27:47.183769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.397 [2024-07-16 00:27:47.183799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.397 qpair failed and we were unable to recover it. 00:26:28.397 [2024-07-16 00:27:47.184003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.397 [2024-07-16 00:27:47.184034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.397 qpair failed and we were unable to recover it. 00:26:28.397 [2024-07-16 00:27:47.184274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.397 [2024-07-16 00:27:47.184305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.397 qpair failed and we were unable to recover it. 00:26:28.397 [2024-07-16 00:27:47.184523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.397 [2024-07-16 00:27:47.184536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.397 qpair failed and we were unable to recover it. 00:26:28.397 [2024-07-16 00:27:47.184729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.397 [2024-07-16 00:27:47.184760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.397 qpair failed and we were unable to recover it. 00:26:28.397 [2024-07-16 00:27:47.184988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.397 [2024-07-16 00:27:47.185018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.397 qpair failed and we were unable to recover it. 00:26:28.397 [2024-07-16 00:27:47.185256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.397 [2024-07-16 00:27:47.185286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.397 qpair failed and we were unable to recover it. 00:26:28.397 [2024-07-16 00:27:47.185594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.397 [2024-07-16 00:27:47.185625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.397 qpair failed and we were unable to recover it. 00:26:28.397 [2024-07-16 00:27:47.185914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.397 [2024-07-16 00:27:47.185944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.397 qpair failed and we were unable to recover it. 00:26:28.397 [2024-07-16 00:27:47.186210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.397 [2024-07-16 00:27:47.186263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.397 qpair failed and we were unable to recover it. 
00:26:28.397 [2024-07-16 00:27:47.186564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.397 [2024-07-16 00:27:47.186594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.397 qpair failed and we were unable to recover it. 00:26:28.397 [2024-07-16 00:27:47.186818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.397 [2024-07-16 00:27:47.186831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.397 qpair failed and we were unable to recover it. 00:26:28.397 [2024-07-16 00:27:47.187092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.397 [2024-07-16 00:27:47.187121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.397 qpair failed and we were unable to recover it. 00:26:28.397 [2024-07-16 00:27:47.187438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.397 [2024-07-16 00:27:47.187469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.397 qpair failed and we were unable to recover it. 00:26:28.397 [2024-07-16 00:27:47.187628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.397 [2024-07-16 00:27:47.187641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.397 qpair failed and we were unable to recover it. 00:26:28.397 [2024-07-16 00:27:47.187869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.397 [2024-07-16 00:27:47.187882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.397 qpair failed and we were unable to recover it. 00:26:28.397 [2024-07-16 00:27:47.188113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.397 [2024-07-16 00:27:47.188143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.397 qpair failed and we were unable to recover it. 00:26:28.397 [2024-07-16 00:27:47.188375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.397 [2024-07-16 00:27:47.188405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.397 qpair failed and we were unable to recover it. 00:26:28.397 [2024-07-16 00:27:47.188653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.397 [2024-07-16 00:27:47.188682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.397 qpair failed and we were unable to recover it. 00:26:28.397 [2024-07-16 00:27:47.188924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.397 [2024-07-16 00:27:47.188955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 
00:26:28.398 [2024-07-16 00:27:47.189266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.189296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.398 [2024-07-16 00:27:47.189620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.189650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.398 [2024-07-16 00:27:47.189815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.189845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.398 [2024-07-16 00:27:47.190031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.190061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.398 [2024-07-16 00:27:47.190371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.190401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.398 [2024-07-16 00:27:47.190589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.190620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.398 [2024-07-16 00:27:47.190877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.190890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.398 [2024-07-16 00:27:47.191099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.191112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.398 [2024-07-16 00:27:47.191316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.191352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.398 [2024-07-16 00:27:47.191574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.191604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 
00:26:28.398 [2024-07-16 00:27:47.191970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.192000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.398 [2024-07-16 00:27:47.192273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.192304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.398 [2024-07-16 00:27:47.192494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.192523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.398 [2024-07-16 00:27:47.192855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.192885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.398 [2024-07-16 00:27:47.193065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.193095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.398 [2024-07-16 00:27:47.193412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.193442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.398 [2024-07-16 00:27:47.193739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.193753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.398 [2024-07-16 00:27:47.193911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.193925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.398 [2024-07-16 00:27:47.194155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.194169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.398 [2024-07-16 00:27:47.194395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.194409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 
00:26:28.398 [2024-07-16 00:27:47.194616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.194646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.398 [2024-07-16 00:27:47.194904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.194935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.398 [2024-07-16 00:27:47.195178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.398 [2024-07-16 00:27:47.195208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.398 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.195485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.195517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.195756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.195772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.195978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.196007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.196244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.196275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.196513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.196543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.196785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.196815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.197068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.197098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 
00:26:28.672 [2024-07-16 00:27:47.197353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.197384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.197630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.197659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.197890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.197920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.198111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.198141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.198387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.198417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.198677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.198713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.199027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.199040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.199195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.199209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.199441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.199473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.199774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.199803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 
00:26:28.672 [2024-07-16 00:27:47.200047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.200077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.200259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.200290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.200468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.200481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.200705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.200735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.200979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.201009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.201328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.201359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-16 00:27:47.201675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-16 00:27:47.201705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.202017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.202052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.202287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.202318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.202642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.202671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 
00:26:28.673 [2024-07-16 00:27:47.202976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.203007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.203274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.203288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.203415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.203428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.203572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.203586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.203940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.203970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.204167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.204197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.204505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.204537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.204882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.204911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.205160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.205190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.205521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.205553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 
00:26:28.673 [2024-07-16 00:27:47.205794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.205823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.206088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.206118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.206291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.206323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.206572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.206601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.206828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.206841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.207075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.207105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.207346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.207376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.207706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.207736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.207918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.207947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.208182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.208211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 
00:26:28.673 [2024-07-16 00:27:47.208485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.208499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.208704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.208718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.208920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.208933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.209127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.209141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.209374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.209405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.209718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.209748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.209913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.209926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.210121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.210151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.210377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.210408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.210540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.210570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 
00:26:28.673 [2024-07-16 00:27:47.210816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.210829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.211096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.211109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.211301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.211315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.211463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.211477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.211679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.211708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.211998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.212027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.212262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.212293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-16 00:27:47.212538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-16 00:27:47.212552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.212705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.212719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.212817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.212831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 
00:26:28.674 [2024-07-16 00:27:47.213039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.213052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.213265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.213279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.213488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.213501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.213689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.213702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.213840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.213854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.213982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.213995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.214251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.214264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.214466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.214479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.214689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.214703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.214848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.214861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 
00:26:28.674 [2024-07-16 00:27:47.215089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.215119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.215302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.215335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.215546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.215559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.215702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.215717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.215859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.215896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.216189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.216219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.216462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.216475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.216702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.216715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.216920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.216934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.217127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.217140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 
00:26:28.674 [2024-07-16 00:27:47.217369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.217400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.217629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.217658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.217902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.217932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.218172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.218203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.218434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.218448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.218691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.218720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.218957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.218987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.219284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.219315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.219576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.219606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.219849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.219863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 
00:26:28.674 [2024-07-16 00:27:47.220065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.220095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.220268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.220298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.220614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.220645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.220935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.220965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.221143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.221173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.221422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.221453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.221610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.221639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.221971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.222001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.222259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-16 00:27:47.222290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-16 00:27:47.222532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.222562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 
00:26:28.675 [2024-07-16 00:27:47.222784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.222819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.223071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.223102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.223440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.223470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.223702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.223715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.223946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.223976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.224205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.224258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.224601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.224630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.224877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.224908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.225234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.225265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.225421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.225450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 
00:26:28.675 [2024-07-16 00:27:47.225569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.225599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.225834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.225863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.226176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.226206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.226406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.226419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.226704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.226734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.226903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.226933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.227173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.227203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.227452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.227483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.227618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.227647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.227907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.227921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 
00:26:28.675 [2024-07-16 00:27:47.228219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.228237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.228468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.228481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.228740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.228754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.228955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.228969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.229169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.229183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.229327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.229342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.229581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.229611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.229842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.229877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.230061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.230092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.230387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.230417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 
00:26:28.675 [2024-07-16 00:27:47.230648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.230678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.230927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.230958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.231147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.231175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.231433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.231464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.231793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.231823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.232151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.232181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.232481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.232512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.232764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.232778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.233039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.233070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-16 00:27:47.233294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-16 00:27:47.233324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 
00:26:28.675 [2024-07-16 00:27:47.233619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.233649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.233922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.233952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.234115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.234145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.234440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.234471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.234703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.234732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.234967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.234997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.235291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.235322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.235439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.235452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.235741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.235771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.236103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.236133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 
00:26:28.676 [2024-07-16 00:27:47.236308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.236339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.236528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.236557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.236792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.236823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.237137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.237166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.237455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.237486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.237700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.237713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.237841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.237854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.238089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.238119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.238295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.238326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.238636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.238666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 
00:26:28.676 [2024-07-16 00:27:47.238909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.238939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.239260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.239291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.239522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.239535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.239630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.239643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.239930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.239960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.240204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.240254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.240439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.240453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.240616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.240647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.240865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.240933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.241257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.241324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 
00:26:28.676 [2024-07-16 00:27:47.241527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.241562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.241827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.241857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.242095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-16 00:27:47.242124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-16 00:27:47.242423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.242455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.242669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.242679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.242952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.242982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.243169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.243199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.243387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.243431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.243589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.243603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.243774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.243805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 
00:26:28.677 [2024-07-16 00:27:47.244039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.244068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.244359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.244389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.244572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.244603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.244839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.244868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.245174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.245204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.245551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.245582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.245919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.245949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.246183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.246213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.246384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.246404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.246556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.246570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 
00:26:28.677 [2024-07-16 00:27:47.246848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.246877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.247189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.247219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.247485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.247515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.247763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.247792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.247896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.247910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.248166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.248177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.248385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.248416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.248677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.248707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.248894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.248923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.249246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.249277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 
00:26:28.677 [2024-07-16 00:27:47.249514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.249545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.249734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.249763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.249979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.249988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.250263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.250273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.250522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.250551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.250786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.250815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.250990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.251000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.251182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.251191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.251475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.251514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.251762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.251791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 
00:26:28.677 [2024-07-16 00:27:47.252085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.252115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.252349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.252379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.252705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.252735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.252925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.252955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-16 00:27:47.253269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-16 00:27:47.253299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.253612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.253642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.253858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.253868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.254010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.254039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.254333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.254363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.254533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.254563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 
00:26:28.678 [2024-07-16 00:27:47.254766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.254796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.255026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.255055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.255286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.255317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.255492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.255521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.255807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.255816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.256063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.256072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.256238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.256248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.256402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.256411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.256542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.256551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.256729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.256738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 
00:26:28.678 [2024-07-16 00:27:47.256941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.256951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.257161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.257191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.257514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.257544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.257713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.257744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.257966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.257996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.258249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.258292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.258548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.258579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.258825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.258855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.259015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.259044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.259338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.259369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 
00:26:28.678 [2024-07-16 00:27:47.259681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.259711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.259850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.259879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.260139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.260169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.260358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.260389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.260572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.260603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.260784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.260814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.260976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.260989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.261195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.261208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.261363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.261381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.261571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.261584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 
00:26:28.678 [2024-07-16 00:27:47.261871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.261899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.262071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.262101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.262354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.262384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.262544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.262574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.262828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-16 00:27:47.262858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-16 00:27:47.263096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-16 00:27:47.263126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-16 00:27:47.263307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-16 00:27:47.263336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-16 00:27:47.263515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-16 00:27:47.263528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-16 00:27:47.263790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-16 00:27:47.263820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-16 00:27:47.264126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-16 00:27:47.264155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 
00:26:28.679 [2024-07-16 00:27:47.264450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-16 00:27:47.264481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-16 00:27:47.264724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-16 00:27:47.264754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-16 00:27:47.264936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-16 00:27:47.264966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-16 00:27:47.265197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-16 00:27:47.265235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-16 00:27:47.265551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-16 00:27:47.265590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-16 00:27:47.265730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-16 00:27:47.265744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-16 00:27:47.265886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-16 00:27:47.265900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-16 00:27:47.266156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-16 00:27:47.266198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-16 00:27:47.266490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-16 00:27:47.266556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-16 00:27:47.266852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-16 00:27:47.266915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 
00:26:28.679 [2024-07-16 00:27:47.267119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.267152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.267344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.267379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.267646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.267660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.267866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.267880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.268116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.268129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.268337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.268351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.268502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.268512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.268702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.268732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.268922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.268952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.269141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.269172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.269353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.269384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.269623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.269653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.269914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.269944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.270122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.270152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.270328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.270358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.270675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.270704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.271020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.271050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.271350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.271381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.271646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.271682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.271933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.271962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.272274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.272304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.272520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.272550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.272786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.272816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.273042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.273052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.273336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.273366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.273560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.679 [2024-07-16 00:27:47.273589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.679 qpair failed and we were unable to recover it.
00:26:28.679 [2024-07-16 00:27:47.273815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.273845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.274099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.274109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.274404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.274434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.274672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.274701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.274936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.274966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.275122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.275153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.275473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.275504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.275795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.275824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.276068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.276097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.276389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.276419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.276603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.276633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.276854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.276864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.276988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.276998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.277274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.277284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.277416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.277425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.277539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.277548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.277747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.277776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.278009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.278039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.278267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.278298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.278481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.278517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.278749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.278779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.279066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.279096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.279411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.279442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.279673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.279683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.279867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.279877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.280095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.280124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.280355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.280385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.280700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.280730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.280982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.281011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.281257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.281287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.281526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.281556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.281739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.281769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.282015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.282025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.282221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.282235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.282465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.282495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.282785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.282814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.283053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.680 [2024-07-16 00:27:47.283063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.680 qpair failed and we were unable to recover it.
00:26:28.680 [2024-07-16 00:27:47.283270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.681 [2024-07-16 00:27:47.283280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.681 qpair failed and we were unable to recover it.
00:26:28.681 [2024-07-16 00:27:47.283481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.681 [2024-07-16 00:27:47.283510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.681 qpair failed and we were unable to recover it.
00:26:28.681 [2024-07-16 00:27:47.283700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.681 [2024-07-16 00:27:47.283730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.681 qpair failed and we were unable to recover it.
00:26:28.681 [2024-07-16 00:27:47.283955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.681 [2024-07-16 00:27:47.283984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.681 qpair failed and we were unable to recover it.
00:26:28.681 [2024-07-16 00:27:47.284155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.681 [2024-07-16 00:27:47.284183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.681 qpair failed and we were unable to recover it.
00:26:28.681 [2024-07-16 00:27:47.284419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.681 [2024-07-16 00:27:47.284450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.681 qpair failed and we were unable to recover it.
00:26:28.681 [2024-07-16 00:27:47.284627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.681 [2024-07-16 00:27:47.284657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.681 qpair failed and we were unable to recover it.
00:26:28.681 [2024-07-16 00:27:47.284833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.681 [2024-07-16 00:27:47.284862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.681 qpair failed and we were unable to recover it.
00:26:28.681 [2024-07-16 00:27:47.285153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.681 [2024-07-16 00:27:47.285183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.681 qpair failed and we were unable to recover it.
00:26:28.681 [2024-07-16 00:27:47.285515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.681 [2024-07-16 00:27:47.285547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.681 qpair failed and we were unable to recover it.
00:26:28.681 [2024-07-16 00:27:47.285814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.681 [2024-07-16 00:27:47.285823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.681 qpair failed and we were unable to recover it.
00:26:28.681 [2024-07-16 00:27:47.286039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.681 [2024-07-16 00:27:47.286048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.681 qpair failed and we were unable to recover it.
00:26:28.681 [2024-07-16 00:27:47.286324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.681 [2024-07-16 00:27:47.286354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.681 qpair failed and we were unable to recover it.
00:26:28.681 [2024-07-16 00:27:47.286544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.681 [2024-07-16 00:27:47.286573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.681 qpair failed and we were unable to recover it.
00:26:28.681 [2024-07-16 00:27:47.286747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.681 [2024-07-16 00:27:47.286776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.681 qpair failed and we were unable to recover it.
00:26:28.681 [2024-07-16 00:27:47.287006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.287015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.287195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.287205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.287405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.287436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.287675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.287703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.288024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.288054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.288287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.288318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.288557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.288587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.288895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.288930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.289153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.289182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.289380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.289411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.289601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.289631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.289948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.289977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.290305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.290336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.290560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.290590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.290920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.290950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.291194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.291231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.291455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.291485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.291747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.291776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.292007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.292036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.292271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.292301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.292536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.292566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.292808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.292839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.293024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.293033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.293282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.293293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.293440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.293475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.293764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.293793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.294029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.294059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.294189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.294218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.294528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.294558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.294822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.294851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.295165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.295194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.295446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.295456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.295684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.295713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.295938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.295968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.296206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.296248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.296499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.296529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.296807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.296836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.297171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.297181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.297323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.297333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.297449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.297459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.297730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.682 [2024-07-16 00:27:47.297740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.682 qpair failed and we were unable to recover it.
00:26:28.682 [2024-07-16 00:27:47.297974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.298003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.298326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.298356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.298599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.298636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.298825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.298835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.299042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.299071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.299337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.299367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.299656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.299691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.299957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.299987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.300160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.300189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.300526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.300557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.300794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.300823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.301138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.301167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.301484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.301514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.301766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.301795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.302027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.302057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.302241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.302272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.302459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.302489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.302666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.302695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.302979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.302988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.303190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.303199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.303460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.303470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.303687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.303698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.303977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.304007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.304198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.304244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.304485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.304515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.304829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.304859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.305085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.305114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.305350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.305381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.305605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.305633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.305870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.305900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.306202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.306211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.306412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.306422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.306670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.306680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.306914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.306944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.307250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.307281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.307440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.307470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.307792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.307822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.307980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.308010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.308325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.308355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.308560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.308590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.683 [2024-07-16 00:27:47.308884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.683 [2024-07-16 00:27:47.308914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.683 qpair failed and we were unable to recover it.
00:26:28.684 [2024-07-16 00:27:47.309222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.684 [2024-07-16 00:27:47.309261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.684 qpair failed and we were unable to recover it.
00:26:28.684 [2024-07-16 00:27:47.309578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.684 [2024-07-16 00:27:47.309613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.684 qpair failed and we were unable to recover it.
00:26:28.684 [2024-07-16 00:27:47.309834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.684 [2024-07-16 00:27:47.309844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.684 qpair failed and we were unable to recover it.
00:26:28.684 [2024-07-16 00:27:47.310054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.684 [2024-07-16 00:27:47.310064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.684 qpair failed and we were unable to recover it.
00:26:28.684 [2024-07-16 00:27:47.310271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.684 [2024-07-16 00:27:47.310281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.684 qpair failed and we were unable to recover it.
00:26:28.684 [2024-07-16 00:27:47.310476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.684 [2024-07-16 00:27:47.310487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.684 qpair failed and we were unable to recover it.
00:26:28.684 [2024-07-16 00:27:47.310677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.684 [2024-07-16 00:27:47.310687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.684 qpair failed and we were unable to recover it.
00:26:28.684 [2024-07-16 00:27:47.310885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.684 [2024-07-16 00:27:47.310894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.684 qpair failed and we were unable to recover it.
00:26:28.684 [2024-07-16 00:27:47.311097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.684 [2024-07-16 00:27:47.311126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.684 qpair failed and we were unable to recover it.
00:26:28.684 [2024-07-16 00:27:47.311375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.684 [2024-07-16 00:27:47.311406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.684 qpair failed and we were unable to recover it.
00:26:28.684 [2024-07-16 00:27:47.311659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.311690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.312036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.312066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.312361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.312407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.312722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.312751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.312996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.313026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.313216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.313257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.313482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.313512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.313745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.313775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.313959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.313996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.314122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.314131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 
00:26:28.684 [2024-07-16 00:27:47.314322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.314332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.314556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.314586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.314890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.314921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.315155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.315165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.315363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.315373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.315576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.315605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.315844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.315873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.316100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.316129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.316376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.316408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.316535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.316565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 
00:26:28.684 [2024-07-16 00:27:47.316898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.316928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.317169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.317199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.317438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.317469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.317646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.317675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.317932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.317961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.318197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.318233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.318477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.318508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.318694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.318704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.318901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.318931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-16 00:27:47.319164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.319194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 
00:26:28.684 [2024-07-16 00:27:47.319497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-16 00:27:47.319528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.319843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.319872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.320141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.320171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.320452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.320483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.320727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.320757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.320931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.320966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.321141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.321151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.321381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.321412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.321673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.321703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.321859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.321890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 
00:26:28.685 [2024-07-16 00:27:47.322075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.322085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.322297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.322326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.322661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.322690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.322924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.322933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.323188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.323217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.323545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.323582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.323890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.323920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.324077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.324106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.324353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.324382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.324708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.324739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 
00:26:28.685 [2024-07-16 00:27:47.324978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.325008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.325245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.325276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.325578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.325608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.325785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.325815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.325999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.326009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.326202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.326239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.326461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.326490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.326714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.326744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.326891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.326920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.327145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.327174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 
00:26:28.685 [2024-07-16 00:27:47.327423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.327452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.327689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.327718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.327931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.327960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.328283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.328294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.328577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.328587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-16 00:27:47.328834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-16 00:27:47.328844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.329034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.329064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.329406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.329437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.329673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.329703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.329941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.329971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 
00:26:28.686 [2024-07-16 00:27:47.330261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.330292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.330583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.330613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.330790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.330819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.330952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.330982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.331271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.331281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.331552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.331563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.331764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.331794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.332100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.332129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.332390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.332421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.332646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.332683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 
00:26:28.686 [2024-07-16 00:27:47.332881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.332891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.333112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.333122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.333409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.333439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.333678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.333708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.334022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.334052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.334279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.334309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.334619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.334649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.334942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.334972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.335286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.335296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-16 00:27:47.335505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-16 00:27:47.335535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 
00:26:28.686 [2024-07-16 00:27:47.337498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.686 [2024-07-16 00:27:47.337566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:28.686 qpair failed and we were unable to recover it.
[condensed: the failures for tqpair=0x7f917c000b90 continue through 00:27:47.337, after which the same three-record sequence repeats for tqpair=0x7f9174000b90 from 00:27:47.337 through 00:27:47.361; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111.]
00:26:28.689 [2024-07-16 00:27:47.361833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-16 00:27:47.361862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-16 00:27:47.362094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-16 00:27:47.362108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-16 00:27:47.362324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-16 00:27:47.362355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-16 00:27:47.362588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-16 00:27:47.362618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-16 00:27:47.362860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-16 00:27:47.362874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-16 00:27:47.363073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-16 00:27:47.363107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-16 00:27:47.363348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-16 00:27:47.363383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-16 00:27:47.363568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-16 00:27:47.363597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-16 00:27:47.363777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-16 00:27:47.363807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-16 00:27:47.364112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-16 00:27:47.364154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 
00:26:28.689 [2024-07-16 00:27:47.364451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-16 00:27:47.364482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-16 00:27:47.364637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-16 00:27:47.364667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-16 00:27:47.364958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-16 00:27:47.364995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-16 00:27:47.365199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-16 00:27:47.365212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-16 00:27:47.365364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-16 00:27:47.365394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-16 00:27:47.365626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-16 00:27:47.365655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-16 00:27:47.365899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-16 00:27:47.365928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-16 00:27:47.366233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-16 00:27:47.366266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-16 00:27:47.366457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-16 00:27:47.366487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.366721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.366751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 
00:26:28.690 [2024-07-16 00:27:47.366984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.366998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.367142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.367155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.367256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.367271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.367477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.367491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.367628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.367641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.367860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.367873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.368069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.368098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.368272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.368303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.368546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.368576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.368830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.368860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 
00:26:28.690 [2024-07-16 00:27:47.369118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.369132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.369412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.369426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.369692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.369721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.369877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.369891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.370039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.370069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.370310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.370341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.370576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.370605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.370728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.370758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.371047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.371076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.371243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.371261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 
00:26:28.690 [2024-07-16 00:27:47.371464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.371478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.371673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.371703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.371945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.371975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.372200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.372247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.372451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.372465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.372641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.372657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.372854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.372883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.373175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.373205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.373375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.373405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.373709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.373738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 
00:26:28.690 [2024-07-16 00:27:47.373980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.374010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.374341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.374372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-16 00:27:47.374608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-16 00:27:47.374638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.374814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.374843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.375087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.375100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.375246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.375262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.375415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.375445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.375669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.375698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.375898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.375928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.376049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.376079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 
00:26:28.691 [2024-07-16 00:27:47.376257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.376287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.376464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.376493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.376738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.376768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.377007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.377037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.377331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.377361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.377536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.377566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.377794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.377824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.378052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.378066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.378184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.378198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.378410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.378441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 
00:26:28.691 [2024-07-16 00:27:47.378734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.378764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.378993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.379022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.379245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.379314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.379625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.379693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.380010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.380044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.380294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.380327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.380554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.380585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.380813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.380843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.381087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.381117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.381425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.381439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 
00:26:28.691 [2024-07-16 00:27:47.381636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.381650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.381851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.381864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.382059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.382072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.382272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.382286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.382492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.382506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.382705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.382723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.382877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.382907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.383086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.383117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.383398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.383429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.383733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.383763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 
00:26:28.691 [2024-07-16 00:27:47.384054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.384084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.384333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.384364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.384693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-16 00:27:47.384724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-16 00:27:47.385036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.385067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.385304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.385318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.385524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.385537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.385728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.385742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.385998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.386011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.386155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.386169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.386379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.386393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 
00:26:28.692 [2024-07-16 00:27:47.386588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.386617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.386841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.386871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.387151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.387181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.387510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.387540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.387658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.387698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.387930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.387944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.388247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.388261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.388484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.388498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.388699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.388712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.388912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.388925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 
00:26:28.692 [2024-07-16 00:27:47.389139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.389154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.389444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.389473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.389686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.389716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.389982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.390011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.390250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.390280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.390561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.390591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.390894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.390923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.391167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.391181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.391327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.391341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.391602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.391631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 
00:26:28.692 [2024-07-16 00:27:47.391872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.391902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.392092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.392121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.392411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.392441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.392676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.392706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.392938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.392968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.393242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.393272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.393519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.393549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.393785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.393814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.394136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.394167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-16 00:27:47.394413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-16 00:27:47.394444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 
00:26:28.692 [2024-07-16 00:27:47.394682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.692 [2024-07-16 00:27:47.394712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420
00:26:28.692 qpair failed and we were unable to recover it.
00:26:28.692 [2024-07-16 00:27:47.394956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.692 [2024-07-16 00:27:47.394986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420
00:26:28.692 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair-failure sequence repeats for every retry from 00:27:47.395 through 00:27:47.449, always with errno = 111 against tqpair=0x7f9184000b90, addr=10.0.0.2, port=4420 ...]
00:26:28.698 [2024-07-16 00:27:47.449221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.698 [2024-07-16 00:27:47.449260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420
00:26:28.698 qpair failed and we were unable to recover it.
00:26:28.698 [2024-07-16 00:27:47.449589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.449619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.449864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.449893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.450138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.450168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.450387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.450401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.450613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.450626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.450827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.450856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.451045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.451075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.451251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.451281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.451600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.451629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.451897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.451927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 
00:26:28.698 [2024-07-16 00:27:47.452220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.452270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.452495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.452525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.452767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.452802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.453027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.453057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.453343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.453357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.453547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.453561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.453712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.453725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.453917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.453930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.454216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.454254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.454444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.454474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 
00:26:28.698 [2024-07-16 00:27:47.454789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.454818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.455063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.455092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.455331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.455345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.455601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.455614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.455826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.455856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.456103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.456133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.456302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.456316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.456542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.456555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.698 [2024-07-16 00:27:47.456847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.698 [2024-07-16 00:27:47.456876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.698 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.457145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.457175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 
00:26:28.699 [2024-07-16 00:27:47.457423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.457453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.457783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.457812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.457994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.458023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.458264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.458302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.458619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.458649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.458894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.458924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.459157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.459187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.459465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.459495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.459683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.459714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.459956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.459985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 
00:26:28.699 [2024-07-16 00:27:47.460260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.460292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.460462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.460491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.460718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.460747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.461010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.461040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.461329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.461343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.461539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.461569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.461820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.461850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.462146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.462175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.462422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.462452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.462697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.462726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 
00:26:28.699 [2024-07-16 00:27:47.462965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.462995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.463246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.463276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.463463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.463499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.463792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.463822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.463993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.464022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.464261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.464291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.464519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.464548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.464819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.464849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.465166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.465196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.465453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.465482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 
00:26:28.699 [2024-07-16 00:27:47.465670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.465700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.465938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.465967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.466233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.466265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.466562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.699 [2024-07-16 00:27:47.466591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.699 qpair failed and we were unable to recover it. 00:26:28.699 [2024-07-16 00:27:47.466814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.466843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.467079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.467093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.467294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.467309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.467456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.467470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.467760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.467789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.468108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.468145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 
00:26:28.700 [2024-07-16 00:27:47.468344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.468358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.468557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.468571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.468783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.468796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.468952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.468968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.469180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.469193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.469450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.469464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.469725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.469739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.469946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.469959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.470168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.470182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.470401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.470415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 
00:26:28.700 [2024-07-16 00:27:47.470621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.470650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.470875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.470905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.471239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.471269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.471464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.471494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.471791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.471821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.472056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.472085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.472345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.472376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.472626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.472656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.472926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.472955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.473205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.473243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 
00:26:28.700 [2024-07-16 00:27:47.473533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.473563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.473856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.473885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.474135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.474171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.474331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.474361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.474680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.474710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.474932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.474961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.475203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.475239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.475460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.475474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.475610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.475624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.475915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.475945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 
00:26:28.700 [2024-07-16 00:27:47.476190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.476220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.476469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.476499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.476724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.476754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.476988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.477018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.477234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.700 [2024-07-16 00:27:47.477248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.700 qpair failed and we were unable to recover it. 00:26:28.700 [2024-07-16 00:27:47.477478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.477508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.477806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.477835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.478026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.478039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.478243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.478274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.478565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.478594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 
00:26:28.701 [2024-07-16 00:27:47.478911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.478940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.479244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.479274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.479450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.479465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.479607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.479636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.479950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.479980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.480293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.480308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.480513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.480526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.480741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.480754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.480899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.480913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.481063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.481093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 
00:26:28.701 [2024-07-16 00:27:47.481344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.481374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.481623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.481653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.481887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.481917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.482157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.482186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.482422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.482436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.482723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.482753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.483043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.483073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.483361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.483375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.483648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.483677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 00:26:28.701 [2024-07-16 00:27:47.483989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.701 [2024-07-16 00:27:47.484019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.701 qpair failed and we were unable to recover it. 
00:26:28.701 [2024-07-16 00:27:47.484245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.701 [2024-07-16 00:27:47.484276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420
00:26:28.701 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats roughly 200 more times with advancing timestamps, from [2024-07-16 00:27:47.484454] through [2024-07-16 00:27:47.540152] ...]
00:26:28.983 [2024-07-16 00:27:47.540394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-16 00:27:47.540424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-16 00:27:47.540654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-16 00:27:47.540667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-16 00:27:47.540974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-16 00:27:47.541004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-16 00:27:47.541295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-16 00:27:47.541325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-16 00:27:47.541586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-16 00:27:47.541616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-16 00:27:47.541878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-16 00:27:47.541907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-16 00:27:47.542087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-16 00:27:47.542115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-16 00:27:47.542430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-16 00:27:47.542460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-16 00:27:47.542746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-16 00:27:47.542776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.543008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.543037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 
00:26:28.984 [2024-07-16 00:27:47.543262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.543293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.543521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.543550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.543865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.543894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.544085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.544124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.544304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.544317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.544545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.544574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.544773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.544802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.545051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.545080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.545371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.545385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.545592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.545608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 
00:26:28.984 [2024-07-16 00:27:47.545826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.545855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.546185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.546215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.546460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.546474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.546671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.546700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.546939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.546968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.547207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.547245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.547425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.547454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.547768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.547797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.548023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.548053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.548234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.548248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 
00:26:28.984 [2024-07-16 00:27:47.548479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.548508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.548771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.548801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.548983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.549013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.549262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.549294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.549529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.549559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.549798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.549827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.550067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.550096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.550327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.550357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.550542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.550571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.550890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.550920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 
00:26:28.984 [2024-07-16 00:27:47.551217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.551254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.551546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.551575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.551886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.551915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-16 00:27:47.552205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-16 00:27:47.552241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.552598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.552628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.552857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.552887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.553137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.553166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.553417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.553448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.553758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.553787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.553978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.554008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 
00:26:28.985 [2024-07-16 00:27:47.554174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.554204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.554513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.554527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.554817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.554831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.554953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.554967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.555158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.555172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.555385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.555415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.555594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.555624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.555875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.555905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.556152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.556182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.556351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.556369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 
00:26:28.985 [2024-07-16 00:27:47.556678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.556706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.556996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.557026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.557259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.557273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.557475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.557504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.557739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.557769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.557995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.558025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.558265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.558294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.558498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.558527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.558793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.558822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.559044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.559074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 
00:26:28.985 [2024-07-16 00:27:47.559253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.559284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.559483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.559512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.559760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.559773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.559965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.559978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.560169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.560182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.560315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.560329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.560523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-16 00:27:47.560552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-16 00:27:47.560842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.560871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.561050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.561080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.561397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.561427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 
00:26:28.986 [2024-07-16 00:27:47.561664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.561693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.561916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.561946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.562141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.562170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.562436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.562466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.562682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.562695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.562838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.562868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.563103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.563133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.563383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.563426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.563633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.563647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.563852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.563865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 
00:26:28.986 [2024-07-16 00:27:47.563950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.563980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.564254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.564283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.564467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.564497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.564806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.564835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.565060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.565089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.565412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.565442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.565634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.565648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.565863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.565893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.566208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.566246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.566485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.566501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 
00:26:28.986 [2024-07-16 00:27:47.566710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.566739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.566982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.567012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.567244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.567274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.567444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.567474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.567770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.567799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.567982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.568011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.568183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.568197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.568402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.568432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.568744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.568774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.569067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.569096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 
00:26:28.986 [2024-07-16 00:27:47.569386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.569416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.569591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.569604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.569858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.569872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.570129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.570167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.570340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.570370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.570659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.570689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.570927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.570957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-16 00:27:47.571280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-16 00:27:47.571294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-16 00:27:47.571496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.571509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-16 00:27:47.571716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.571730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 
00:26:28.987 [2024-07-16 00:27:47.571852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.571865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-16 00:27:47.572150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.572179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-16 00:27:47.572497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.572527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-16 00:27:47.572743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.572756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-16 00:27:47.572962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.572976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-16 00:27:47.573257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.573288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-16 00:27:47.573535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.573565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-16 00:27:47.573796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.573811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-16 00:27:47.574017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.574046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-16 00:27:47.574293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.574323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 
00:26:28.987 [2024-07-16 00:27:47.574493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.574507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-16 00:27:47.574661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.574674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-16 00:27:47.574936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.574965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-16 00:27:47.575256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.575286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-16 00:27:47.575599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.575612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-16 00:27:47.575768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.575781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-16 00:27:47.576006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.576020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-16 00:27:47.576283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.576313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-16 00:27:47.576553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.576582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-16 00:27:47.576805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-16 00:27:47.576821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 
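For reference, errno 111 on Linux is ECONNREFUSED: the TCP SYN reaches 10.0.0.2 but nothing is listening on port 4420, which is exactly the window in which this test has killed the target. The following is a minimal stand-alone sketch in plain POSIX C (not SPDK source; the address and port are copied from the log) that reproduces the same errno against a closed port:

    /*
     * Minimal sketch, assuming only a Linux host with nothing listening
     * on 10.0.0.2:4420 -- reproduces the "connect() failed, errno = 111"
     * reported by posix_sock_create above. Not SPDK code.
     */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);              /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener this prints: connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }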
00:26:28.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1664709 Killed "${NVMF_APP[@]}" "$@"
00:26:28.988 00:27:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:26:28.988 00:27:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:28.989 00:27:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:28.989 00:27:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@716 -- # xtrace_disable
00:26:28.989 00:27:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:28.990 00:27:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1665436
00:26:28.990 00:27:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1665436
00:26:28.990 00:27:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:28.990 00:27:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@823 -- # '[' -z 1665436 ']'
00:26:28.990 00:27:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:28.990 00:27:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # local max_retries=100
00:26:28.990 00:27:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:28.990 00:27:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # xtrace_disable
00:26:28.990 00:27:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
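The trace above shows the harness restarting the target (ip netns exec ... nvmf_tgt) and then waiting via waitforlisten with max_retries=100; the real helper is shell that polls the RPC socket at /var/tmp/spdk.sock. As a hedged C analogue of the same wait-until-listening pattern (polling the TCP listener instead; wait_for_listener and the 100 ms delay are illustrative assumptions, not harness code):

    /*
     * Sketch only, assuming the same address/port as the log. Retries the
     * connect() as long as it is refused, the way the host-side qpair keeps
     * retrying above, and succeeds once the restarted target is listening.
     */
    #include <errno.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    static int wait_for_listener(const char *ip, uint16_t port, int max_retries)
    {
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);

        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;                /* target is accepting again */
            }
            int err = errno;             /* save before close() can clobber it */
            close(fd);
            if (err != ECONNREFUSED)
                return -1;               /* some other failure; give up */
            usleep(100 * 1000);          /* retry the "no listener yet" case */
        }
        return -1;                       /* still refused after max_retries */
    }

    /* e.g. wait_for_listener("10.0.0.2", 4420, 100) returns 0 once nvmf_tgt listens. */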
00:26:28.991 [2024-07-16 00:27:47.609194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.991 [2024-07-16 00:27:47.609221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.991 qpair failed and we were unable to recover it.
00:26:28.992 [2024-07-16 00:27:47.619084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-16 00:27:47.619095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-16 00:27:47.619213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-16 00:27:47.619228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-16 00:27:47.619433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-16 00:27:47.619444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-16 00:27:47.619711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-16 00:27:47.619721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-16 00:27:47.619924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-16 00:27:47.619934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-16 00:27:47.620129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-16 00:27:47.620139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-16 00:27:47.620268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-16 00:27:47.620278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-16 00:27:47.620524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-16 00:27:47.620534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-16 00:27:47.620682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-16 00:27:47.620692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-16 00:27:47.620811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-16 00:27:47.620821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 
00:26:28.992 [2024-07-16 00:27:47.621008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-16 00:27:47.621018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-16 00:27:47.621282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-16 00:27:47.621293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-16 00:27:47.621516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-16 00:27:47.621526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-16 00:27:47.621773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-16 00:27:47.621784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-16 00:27:47.621980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-16 00:27:47.621990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-16 00:27:47.622112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-16 00:27:47.622122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-16 00:27:47.622416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-16 00:27:47.622426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-16 00:27:47.622554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-16 00:27:47.622564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-16 00:27:47.622757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-16 00:27:47.622767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.622950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.622960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 
00:26:28.993 [2024-07-16 00:27:47.623216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.623231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.623429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.623439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.623634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.623644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.623834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.623844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.624055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.624066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.624199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.624209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.624411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.624422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.624508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.624518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.624715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.624725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.624921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.624931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 
00:26:28.993 [2024-07-16 00:27:47.625133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.625143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.625342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.625352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.625482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.625492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.625620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.625630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.625830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.625840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.626031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.626040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.626232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.626242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.626511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.626521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.626663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.626673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.626889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.626899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 
00:26:28.993 [2024-07-16 00:27:47.627119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.627130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.627268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.627281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.627372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.627382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.627526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.627536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.627756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.627765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.627910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.627920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.628139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.628149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.628365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.628375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.628642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.628652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.628779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.628790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 
00:26:28.993 [2024-07-16 00:27:47.628973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.628983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.629113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.629123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.629329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.629340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.629474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.629484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.629686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.629696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-16 00:27:47.629835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-16 00:27:47.629845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.630035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.630045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.630228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.630239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.630430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.630440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.630648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.630658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 
00:26:28.994 [2024-07-16 00:27:47.630902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.630912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.631114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.631124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.631378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.631388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.631535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.631546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.631729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.631739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.631868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.631878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.632070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.632081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.632300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.632310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.632439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.632451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.632654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.632664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 
00:26:28.994 [2024-07-16 00:27:47.632843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.632853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.633050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.633060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.633199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.633210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.633333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.633344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.633530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.633540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.633721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.633731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.633910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.633920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.634046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.634056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.634176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.634186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.634367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.634377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 
00:26:28.994 [2024-07-16 00:27:47.634576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.634587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.634859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.634869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.635055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.635066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.635194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.635204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.635398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.635408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.635541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.635551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.635746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.635756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.635901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.635912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.636136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.636146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.636230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.636240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 
00:26:28.994 [2024-07-16 00:27:47.636493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.636504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.636633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.636643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.636897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.636907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.637089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.637098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-16 00:27:47.637246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-16 00:27:47.637257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.637443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.637453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.637586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.637597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.637794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.637804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.637935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.637945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.638086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.638096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 
00:26:28.995 [2024-07-16 00:27:47.638308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.638319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.638547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.638557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.638689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.638700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.638887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.638897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.639036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.639046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.639231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.639242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.639517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.639528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.639783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.639793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.639932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.639944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.640107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.640117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 
00:26:28.995 [2024-07-16 00:27:47.640314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.640324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.640421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.640431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.640628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.640638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.640837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.640847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.641033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.641043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.641236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.641246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.641360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.641370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.641617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.641627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.641808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.641818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.642011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.642021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 
00:26:28.995 [2024-07-16 00:27:47.642214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.642229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.642424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.642434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.642620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.642630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.642822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.642832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.643108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.643118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.643311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.643321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.643517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.643527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.643724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.643734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.643857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.643867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.643996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.644006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 
00:26:28.995 [2024-07-16 00:27:47.644205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.644215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.644466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.644477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.644600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.644610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.644879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.644890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.645077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.645087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-16 00:27:47.645318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-16 00:27:47.645329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.645463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.645473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.645664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.645674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.645855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.645865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.646117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.646127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 
00:26:28.996 [2024-07-16 00:27:47.646254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.646265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.646514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.646524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.646797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.646807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.647088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.647098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.647382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.647393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.647596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.647606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.647804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.647814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.648015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.648025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.648209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.648223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.648422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.648432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 
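errno = 111 is ECONNREFUSED on Linux: the target at 10.0.0.2:4420 is reachable but nothing is accepting TCP connections there, so every connect() inside posix_sock_create() is refused and the qpair can never be established. A minimal standalone C sketch that reproduces the same errno (illustrative only, not the SPDK code path; it assumes a reachable host with no listener on the NVMe/TCP port):

/* Sketch: a blocking TCP connect() to a port with no listener fails
 * with errno == ECONNREFUSED (111 on Linux), the same error the SPDK
 * posix sock layer logs above. Address and port are illustrative. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the host up but no listener bound, this prints errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

An unreachable host would instead surface ETIMEDOUT or EHOSTUNREACH, so a steady stream of 111s here indicates the target machine is up but its nvmf listener is not (yet) serving the port.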
00:26:28.996 [2024-07-16 00:27:47.648576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.648586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.648721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.648731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.648930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.648940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.649150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.649160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.649428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.649438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.649569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.649579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.649718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.649728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.649973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.649983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.650069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-16 00:27:47.650079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-16 00:27:47.650183] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:26:28.996 [2024-07-16 00:27:47.650223] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:28.996 [2024-07-16 00:27:47.650328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.996 [2024-07-16 00:27:47.650337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:28.996 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure pair repeats continuously from 00:27:47.650 through at least 00:27:47.687 ...]
00:26:29.001 [2024-07-16 00:27:47.687356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-16 00:27:47.687366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-16 00:27:47.687499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-16 00:27:47.687510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-16 00:27:47.687760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-16 00:27:47.687770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-16 00:27:47.687968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-16 00:27:47.687978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-16 00:27:47.688105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-16 00:27:47.688115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-16 00:27:47.688375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-16 00:27:47.688385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-16 00:27:47.688593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-16 00:27:47.688604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-16 00:27:47.688750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-16 00:27:47.688759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-16 00:27:47.688944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-16 00:27:47.688954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-16 00:27:47.689183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-16 00:27:47.689203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 
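On Linux, errno 111 is ECONNREFUSED: the TCP connection attempt to 10.0.0.2 was actively refused, which typically means nothing was yet listening on port 4420 (the NVMe/TCP well-known port), so the kernel's connect() returned an error and posix_sock_create() logged it. The following is a minimal sketch of that same failure path in plain POSIX C, not SPDK's posix.c; the address and port are taken from the log and stand in for whatever target you are probing:

/*
 * Minimal sketch (plain POSIX, not SPDK code): reproduce the
 * "connect() failed, errno = 111" pattern seen in the log. With no
 * listener bound on 10.0.0.2:4420, connect() fails with
 * ECONNREFUSED, which is errno 111 on Linux.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);          /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Same shape as the posix_sock_create() report in the log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Compiled and run on a Linux host with no listener on that address, this prints "connect() failed, errno = 111 (Connection refused)", matching the log lines above.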
00:26:29.001 [... the same failure sequence repeats for tqpair=0x7f9184000b90 from 00:27:47.689438 through 00:27:47.703429 ...]
00:26:29.003 [2024-07-16 00:27:47.703769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.003 [2024-07-16 00:27:47.703804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:29.003 qpair failed and we were unable to recover it.
00:26:29.003 [... the same failure sequence resumes for tqpair=0x7f917c000b90 from 00:27:47.704015 through 00:27:47.704811 ...]
00:26:29.003 [... the same failure sequence repeats for tqpair=0x7f917c000b90 from 00:27:47.705011 through 00:27:47.719542 ...]
00:26:29.004 [2024-07-16 00:27:47.719742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-16 00:27:47.719751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-16 00:27:47.719966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-16 00:27:47.719976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-16 00:27:47.720103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.720112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.720251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.720261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.720444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.720454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.720591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.720601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.720829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.720838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.721038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.721048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.721242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.721252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 [2024-07-16 00:27:47.721250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:29.005 qpair failed and we were unable to recover it. 
00:26:29.005 [2024-07-16 00:27:47.721450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.721460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.721652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.721663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.721784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.721793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.721977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.721987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.722181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.722191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.722372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.722382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.722594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.722604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.722741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.722751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.722880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.722890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.723084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.723093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 
00:26:29.005 [2024-07-16 00:27:47.723219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.723233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.723484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.723494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.723786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.723795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.723978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.723988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.724129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.724140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.724413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.724423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.724665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.724675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.724877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.724887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.725131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.725141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.725334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.725345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 
00:26:29.005 [2024-07-16 00:27:47.725490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.725500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.725783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.725794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.725997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.726008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.726193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.726205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.726376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.726386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.726516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.726527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.726802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.726813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.727061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.727071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.727320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.727330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.727608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.727619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 
00:26:29.005 [2024-07-16 00:27:47.727904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.727915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.728055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.728065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.728250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.728262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-16 00:27:47.728527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-16 00:27:47.728537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.728732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.728745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.728945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.728956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.729082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.729092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.729285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.729297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.729444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.729454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.729651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.729662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 
00:26:29.006 [2024-07-16 00:27:47.729847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.729858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.730110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.730122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.730320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.730332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.730528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.730540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.730803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.730814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.730945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.730957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.731153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.731163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.731364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.731374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.731498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.731508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.731758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.731769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 
00:26:29.006 [2024-07-16 00:27:47.732018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.732028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.732172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.732182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.732454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.732465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.732596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.732606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.732797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.732806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.733052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.733062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.733275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.733285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.733483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.733492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.733694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.733704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.733932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.733942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 
00:26:29.006 [2024-07-16 00:27:47.734136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.734146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.734402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.734412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.734538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.734548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.734759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.734771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.735020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.735030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.735279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.735289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.735559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.735568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.735771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.735780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-16 00:27:47.735928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-16 00:27:47.735938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.736197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.736208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 
00:26:29.007 [2024-07-16 00:27:47.736402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.736412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.736662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.736672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.736933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.736943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.737193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.737203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.737475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.737485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.737631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.737640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.737841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.737852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.737998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.738009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.738209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.738218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.738405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.738415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 
00:26:29.007 [2024-07-16 00:27:47.738559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.738568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.738709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.738719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.738921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.738930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.739144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.739153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.739357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.739367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.739482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.739492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.739743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.739753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.739951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.739960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.740147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.740156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.740369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.740379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 
00:26:29.007 [2024-07-16 00:27:47.740458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.740467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.740683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.740694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.740811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.740820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.740937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.740946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.741196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.741207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.741403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.741413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.741612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.741621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.741849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.741859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.742126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.742135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.742411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.742422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 
00:26:29.007 [2024-07-16 00:27:47.742629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.742639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.742834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.742844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.743035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.743045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.743296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.743311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.743402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.743411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.743598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.743608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.743807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.743817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.743947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.743958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.744099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.744108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-16 00:27:47.744406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-16 00:27:47.744416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 
00:26:29.008 [2024-07-16 00:27:47.744617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.744626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.744744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.744753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.744958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.744969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.745104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.745113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.745319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.745329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.745631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.745642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.745835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.745845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.745999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.746009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.746188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.746199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.746384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.746394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 
00:26:29.008 [2024-07-16 00:27:47.746591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.746600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.746803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.746813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.746902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.746911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.747047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.747057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.747280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.747291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.747520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.747530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.747726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.747735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.747921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.747931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.748065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.748075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.748336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.748347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 
00:26:29.008 [2024-07-16 00:27:47.748438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.748448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.748571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.748582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.748782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.748793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.749053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.749063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.749313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.749323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.749602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.749612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.749814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.749825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.750008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.750017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.750215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.750230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-16 00:27:47.750374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-16 00:27:47.750384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 
00:26:29.008 [2024-07-16 00:27:47.750580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.008 [2024-07-16 00:27:47.750589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:29.008 qpair failed and we were unable to recover it.
00:26:29.008 [... 29 more identical connect() failed (errno = 111) / sock connection error / qpair failed sequences for tqpair=0x7f917c000b90 between 00:27:47.750 and 00:27:47.756 ...]
00:26:29.009 [2024-07-16 00:27:47.756409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.009 [2024-07-16 00:27:47.756433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420
00:26:29.009 qpair failed and we were unable to recover it.
00:26:29.009 [... 39 more identical connect() failed (errno = 111) / sock connection error / qpair failed sequences for tqpair=0x7f9184000b90 between 00:27:47.756 and 00:27:47.765 ...]
00:26:29.010 [2024-07-16 00:27:47.765529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.010 [2024-07-16 00:27:47.765564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5ded0 with addr=10.0.0.2, port=4420
00:26:29.010 qpair failed and we were unable to recover it.
00:26:29.010 [2024-07-16 00:27:47.765885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.010 [2024-07-16 00:27:47.765924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420
00:26:29.010 qpair failed and we were unable to recover it.
00:26:29.010 [2024-07-16 00:27:47.766235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.010 [2024-07-16 00:27:47.766251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:29.010 qpair failed and we were unable to recover it.
00:26:29.013 [... 137 more identical connect() failed (errno = 111) / sock connection error / qpair failed sequences for tqpair=0x7f917c000b90 between 00:27:47.766 and 00:27:47.796 ...]
00:26:29.014 [2024-07-16 00:27:47.796647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.796657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.014 qpair failed and we were unable to recover it. 00:26:29.014 [2024-07-16 00:27:47.796801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.796811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.014 qpair failed and we were unable to recover it. 00:26:29.014 [2024-07-16 00:27:47.797026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.797036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.014 qpair failed and we were unable to recover it. 00:26:29.014 [2024-07-16 00:27:47.797236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.797246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.014 qpair failed and we were unable to recover it. 00:26:29.014 [2024-07-16 00:27:47.797495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.797505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.014 qpair failed and we were unable to recover it. 00:26:29.014 [2024-07-16 00:27:47.797710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.797720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.014 qpair failed and we were unable to recover it. 00:26:29.014 [2024-07-16 00:27:47.797947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.797957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.014 qpair failed and we were unable to recover it. 00:26:29.014 [2024-07-16 00:27:47.798207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.798218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.014 qpair failed and we were unable to recover it. 00:26:29.014 [2024-07-16 00:27:47.798466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.798477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.014 qpair failed and we were unable to recover it. 00:26:29.014 [2024-07-16 00:27:47.798688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.798700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.014 qpair failed and we were unable to recover it. 
00:26:29.014 [2024-07-16 00:27:47.798947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.014 [2024-07-16 00:27:47.798958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:29.014 qpair failed and we were unable to recover it.
00:26:29.014 [2024-07-16 00:27:47.799182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.014 [2024-07-16 00:27:47.799194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:29.014 qpair failed and we were unable to recover it.
00:26:29.014 [2024-07-16 00:27:47.799456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.014 [2024-07-16 00:27:47.799468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:29.014 qpair failed and we were unable to recover it.
00:26:29.014 [2024-07-16 00:27:47.799695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.014 [2024-07-16 00:27:47.799705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:29.014 qpair failed and we were unable to recover it.
00:26:29.014 [2024-07-16 00:27:47.799806] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:29.014 [2024-07-16 00:27:47.799831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:29.014 [2024-07-16 00:27:47.799838] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:29.014 [2024-07-16 00:27:47.799845] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:29.014 [2024-07-16 00:27:47.799850] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:29.014 [2024-07-16 00:27:47.799953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.014 [2024-07-16 00:27:47.799963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:29.014 qpair failed and we were unable to recover it.
00:26:29.014 [2024-07-16 00:27:47.800162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.014 [2024-07-16 00:27:47.800171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:29.014 qpair failed and we were unable to recover it.
00:26:29.014 [2024-07-16 00:27:47.800250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:26:29.014 [2024-07-16 00:27:47.800424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.014 [2024-07-16 00:27:47.800339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:26:29.014 [2024-07-16 00:27:47.800436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:29.014 qpair failed and we were unable to recover it.
00:26:29.014 [2024-07-16 00:27:47.800444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:26:29.014 [2024-07-16 00:27:47.800445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:26:29.014 [2024-07-16 00:27:47.800631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.014 [2024-07-16 00:27:47.800642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:29.014 qpair failed and we were unable to recover it.
00:26:29.014 [2024-07-16 00:27:47.800899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.014 [2024-07-16 00:27:47.800909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:29.014 qpair failed and we were unable to recover it.
00:26:29.014 [2024-07-16 00:27:47.801105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.014 [2024-07-16 00:27:47.801115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:29.014 qpair failed and we were unable to recover it.
00:26:29.014 [2024-07-16 00:27:47.801368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.014 [2024-07-16 00:27:47.801378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:29.014 qpair failed and we were unable to recover it.
00:26:29.014 [2024-07-16 00:27:47.801576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.014 [2024-07-16 00:27:47.801586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:29.014 qpair failed and we were unable to recover it.
00:26:29.014 [2024-07-16 00:27:47.801839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.014 [2024-07-16 00:27:47.801849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:29.014 qpair failed and we were unable to recover it.
00:26:29.014 [2024-07-16 00:27:47.802035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.014 [2024-07-16 00:27:47.802045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:29.014 qpair failed and we were unable to recover it.
00:26:29.014 [2024-07-16 00:27:47.802295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.014 [2024-07-16 00:27:47.802306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:29.014 qpair failed and we were unable to recover it.
00:26:29.014 [2024-07-16 00:27:47.802440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.014 [2024-07-16 00:27:47.802450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420
00:26:29.014 qpair failed and we were unable to recover it.
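The reactor_run notices above come from the SPDK nvmf target application starting up while the initiator keeps retrying: an SPDK reactor is an event loop pinned to one CPU core, and this run brings one up on each of cores 4 through 7. As a rough illustration of the idea only (a generic pthread sketch, not SPDK's reactor.c; it merely pins a thread per core and prints the same notice text):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *reactor_loop(void *arg)
{
    int core = *(int *)arg;
    cpu_set_t set;

    /* Pin this thread to its core; ignored if the core does not exist. */
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    printf("Reactor started on core %d\n", core);
    /* A real reactor would poll its registered pollers here until told
     * to stop; the sketch returns immediately to stay short. */
    return NULL;
}

int main(void)
{
    int cores[] = {4, 5, 6, 7}; /* core IDs taken from the log */
    pthread_t tids[4];

    for (int i = 0; i < 4; i++)
        pthread_create(&tids[i], NULL, reactor_loop, &cores[i]);
    for (int i = 0; i < 4; i++)
        pthread_join(tids[i], NULL);
    return 0;
}

Build with cc -pthread. The out-of-order timestamps in the reflowed records above (core 6 reported between the two halves of a connect-failure pair) are normal interleaving of log writes from multiple cores.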
00:26:29.014 [2024-07-16 00:27:47.802638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.802649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.014 qpair failed and we were unable to recover it. 00:26:29.014 [2024-07-16 00:27:47.802927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.802937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.014 qpair failed and we were unable to recover it. 00:26:29.014 [2024-07-16 00:27:47.803232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.803242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.014 qpair failed and we were unable to recover it. 00:26:29.014 [2024-07-16 00:27:47.803445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.803456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.014 qpair failed and we were unable to recover it. 00:26:29.014 [2024-07-16 00:27:47.803703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.803716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.014 qpair failed and we were unable to recover it. 00:26:29.014 [2024-07-16 00:27:47.803978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.803988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.014 qpair failed and we were unable to recover it. 00:26:29.014 [2024-07-16 00:27:47.804248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.804259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.014 qpair failed and we were unable to recover it. 00:26:29.014 [2024-07-16 00:27:47.804510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.804520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.014 qpair failed and we were unable to recover it. 00:26:29.014 [2024-07-16 00:27:47.804781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.804791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.014 qpair failed and we were unable to recover it. 00:26:29.014 [2024-07-16 00:27:47.805053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.805063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.014 qpair failed and we were unable to recover it. 
00:26:29.014 [2024-07-16 00:27:47.805266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.014 [2024-07-16 00:27:47.805276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.805527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.805536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.805783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.805794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.806015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.806025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.806245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.806256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.806520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.806530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.806783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.806793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.807063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.807073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.807259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.807270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.807419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.807429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 
00:26:29.015 [2024-07-16 00:27:47.807632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.807642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.807934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.807944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.808130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.808140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.808342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.808355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.808627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.808638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.808918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.808929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.809184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.809194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.809463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.809474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.809711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.809722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.809845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.809855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 
00:26:29.015 [2024-07-16 00:27:47.810038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.810048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.810298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.810328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.810616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.810632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.810897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.810912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.811170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.811184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.811458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.811474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.811684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.811698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.812000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.812015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.812275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.812290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.812524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.812537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 
00:26:29.015 [2024-07-16 00:27:47.812813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.812826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.813095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.813110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.813252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.813266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.813544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.813559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.813769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.813792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.813947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.015 [2024-07-16 00:27:47.813962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.015 qpair failed and we were unable to recover it. 00:26:29.015 [2024-07-16 00:27:47.814241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.016 [2024-07-16 00:27:47.814256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.016 qpair failed and we were unable to recover it. 00:26:29.016 [2024-07-16 00:27:47.814483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.016 [2024-07-16 00:27:47.814498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.016 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.814760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.814776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.814967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.814983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 
00:26:29.276 [2024-07-16 00:27:47.815209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.815230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.815462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.815480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.815704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.815716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.815920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.815932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.816126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.816137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.816357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.816369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.816582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.816593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.816783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.816795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.816985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.816996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.817265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.817278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 
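One detail worth noticing in the stretch above: starting with the attempt at 00:27:47.810298 the failing tqpair pointer switches from 0x7f917c000b90 to 0x7f9174000b90, then back again at 00:27:47.815480, so at least two distinct qpair objects are cycling through the same dial-and-fail loop. The observable pattern is plain bounded re-dialing until a listener appears; a generic sketch of that pattern (not SPDK's reconnect logic; the attempt limit and 200 ms backoff are invented for the sketch):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try one TCP dial; return the fd on success, -1 on failure. */
static int dial(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a = { .sin_family = AF_INET, .sin_port = htons(port) };

    if (fd < 0)
        return -1;
    inet_pton(AF_INET, ip, &a.sin_addr);
    if (connect(fd, (struct sockaddr *)&a, sizeof(a)) == 0)
        return fd;              /* listener finally up */
    int err = errno;            /* keep connect()'s errno across close() */
    close(fd);
    errno = err;                /* typically ECONNREFUSED (111) until then */
    return -1;
}

int main(void)
{
    /* Address and port are from the log; limit and backoff are made up. */
    for (int attempt = 0; attempt < 20; attempt++) {
        int fd = dial("10.0.0.2", 4420);
        if (fd >= 0) {
            printf("connected after %d failed attempts\n", attempt);
            close(fd);
            return 0;
        }
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
        usleep(200 * 1000);     /* back off 200 ms between attempts */
    }
    fprintf(stderr, "giving up: unable to recover the connection\n");
    return 1;
}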
00:26:29.276 [2024-07-16 00:27:47.817535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.817547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.817821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.817831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.818035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.818045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.818187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.818198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.818449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.818461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.818719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.818729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.818866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.818876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.819079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.819090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.819387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.819398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.819673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.819683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 
00:26:29.276 [2024-07-16 00:27:47.819883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.819894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.820190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.820202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.820476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.820488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.820686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.820696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.820882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.820892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.821039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.821049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.821244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.821255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.821388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.821399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.821700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.821711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.821899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.821910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 
00:26:29.276 [2024-07-16 00:27:47.822107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.822117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.276 [2024-07-16 00:27:47.822318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.276 [2024-07-16 00:27:47.822329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.276 qpair failed and we were unable to recover it. 00:26:29.277 [2024-07-16 00:27:47.822591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.277 [2024-07-16 00:27:47.822601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.277 qpair failed and we were unable to recover it. 00:26:29.277 [2024-07-16 00:27:47.822897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.277 [2024-07-16 00:27:47.822908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.277 qpair failed and we were unable to recover it. 00:26:29.277 [2024-07-16 00:27:47.823133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.277 [2024-07-16 00:27:47.823147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.277 qpair failed and we were unable to recover it. 00:26:29.277 [2024-07-16 00:27:47.823347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.277 [2024-07-16 00:27:47.823358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.277 qpair failed and we were unable to recover it. 00:26:29.277 [2024-07-16 00:27:47.823610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.277 [2024-07-16 00:27:47.823620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.277 qpair failed and we were unable to recover it. 00:26:29.277 [2024-07-16 00:27:47.823872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.277 [2024-07-16 00:27:47.823883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.277 qpair failed and we were unable to recover it. 00:26:29.277 [2024-07-16 00:27:47.824084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.277 [2024-07-16 00:27:47.824096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.277 qpair failed and we were unable to recover it. 00:26:29.277 [2024-07-16 00:27:47.824366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.277 [2024-07-16 00:27:47.824377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.277 qpair failed and we were unable to recover it. 
00:26:29.277 [2024-07-16 00:27:47.824508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.277 [2024-07-16 00:27:47.824518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.277 qpair failed and we were unable to recover it. 00:26:29.277 [2024-07-16 00:27:47.824791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.277 [2024-07-16 00:27:47.824802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.277 qpair failed and we were unable to recover it. 00:26:29.277 [2024-07-16 00:27:47.825102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.277 [2024-07-16 00:27:47.825114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.277 qpair failed and we were unable to recover it. 00:26:29.277 [2024-07-16 00:27:47.825367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.277 [2024-07-16 00:27:47.825378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.277 qpair failed and we were unable to recover it. 00:26:29.277 [2024-07-16 00:27:47.825692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.277 [2024-07-16 00:27:47.825703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.277 qpair failed and we were unable to recover it. 00:26:29.277 [2024-07-16 00:27:47.825965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.277 [2024-07-16 00:27:47.825975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.277 qpair failed and we were unable to recover it. 00:26:29.277 [2024-07-16 00:27:47.826233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.277 [2024-07-16 00:27:47.826245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.277 qpair failed and we were unable to recover it. 00:26:29.277 [2024-07-16 00:27:47.826437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.277 [2024-07-16 00:27:47.826447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.277 qpair failed and we were unable to recover it. 00:26:29.277 [2024-07-16 00:27:47.826631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.277 [2024-07-16 00:27:47.826642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.277 qpair failed and we were unable to recover it. 00:26:29.277 [2024-07-16 00:27:47.826861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.277 [2024-07-16 00:27:47.826872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.277 qpair failed and we were unable to recover it. 
00:26:29.277 [2024-07-16 00:27:47.827148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.277 [2024-07-16 00:27:47.827159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.277 qpair failed and we were unable to recover it. 
[the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" triplet for tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 repeats some ninety times, with timestamps advancing from 00:27:47.827411 through 00:27:47.849417; only the timestamps change] 
00:26:29.279 [2024-07-16 00:27:47.849617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.279 [2024-07-16 00:27:47.849627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.279 qpair failed and we were unable to recover it. 
00:26:29.279 [2024-07-16 00:27:47.849902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.279 [2024-07-16 00:27:47.849913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f917c000b90 with addr=10.0.0.2, port=4420 00:26:29.279 qpair failed and we were unable to recover it. 00:26:29.279 [2024-07-16 00:27:47.850163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.279 [2024-07-16 00:27:47.850197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9174000b90 with addr=10.0.0.2, port=4420 00:26:29.279 qpair failed and we were unable to recover it. 00:26:29.279 [2024-07-16 00:27:47.850475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.279 [2024-07-16 00:27:47.850514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9184000b90 with addr=10.0.0.2, port=4420 00:26:29.279 qpair failed and we were unable to recover it. 00:26:29.279 A controller has encountered a failure and is being reset. 00:26:29.279 [2024-07-16 00:27:47.850871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.280 [2024-07-16 00:27:47.850905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6c000 with addr=10.0.0.2, port=4420 00:26:29.280 [2024-07-16 00:27:47.850917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6c000 is same with the state(5) to be set 00:26:29.280 [2024-07-16 00:27:47.850934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6c000 (9): Bad file descriptor 00:26:29.280 [2024-07-16 00:27:47.850947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.280 [2024-07-16 00:27:47.850965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.280 [2024-07-16 00:27:47.850976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.280 Unable to reset the controller. 
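errno 111 here is ECONNREFUSED: while the target side of this disconnect test is held down, every TCP connect() from the initiator reaches 10.0.0.2:4420 with no listener and is refused, which is why the triplet above repeats until the reset path gives up. A minimal shell check of the same condition (a sketch only; it reuses the address from this log and assumes nothing is listening there):

  # bash opens a TCP connection via /dev/tcp; with no listener on the port the
  # connect() fails with "Connection refused" (errno 111), matching the
  # posix_sock_create errors above
  ( exec 3<>/dev/tcp/10.0.0.2/4420 ) 2>/dev/null \
      || echo "connect() refused (errno 111 / ECONNREFUSED)"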
00:26:29.847 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:26:29.847 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # return 0 00:26:29.847 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:29.847 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:29.847 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.847 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.847 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:29.847 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:29.847 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.847 Malloc0 00:26:29.847 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:29.847 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:29.847 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:29.847 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.848 [2024-07-16 00:27:48.524316] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.848 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:29.848 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:29.848 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:29.848 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.848 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:29.848 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:29.848 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:29.848 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.848 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:29.848 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.848 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:26:29.848 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.848 [2024-07-16 00:27:48.556542] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.848 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:29.848 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:29.848 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:29.848 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.848 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:29.848 00:27:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1664745 00:26:30.105 Controller properly reset. 00:26:35.372 Initializing NVMe Controllers 00:26:35.372 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:35.372 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:35.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:35.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:35.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:35.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:35.372 Initialization complete. Launching workers. 
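The rpc_cmd calls traced above (bdev_malloc_create through nvmf_subsystem_add_listener) configure the target over its RPC socket. Outside the test harness the same sequence can be issued directly with scripts/rpc.py; a sketch using the exact arguments from this run:

  # create the backing bdev, TCP transport, subsystem, namespace and listeners
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420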
00:26:35.372 Starting thread on core 1 00:26:35.372 Starting thread on core 2 00:26:35.372 Starting thread on core 3 00:26:35.372 Starting thread on core 0 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:35.372 00:26:35.372 real 0m11.138s 00:26:35.372 user 0m37.073s 00:26:35.372 sys 0m5.684s 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.372 ************************************ 00:26:35.372 END TEST nvmf_target_disconnect_tc2 00:26:35.372 ************************************ 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1136 -- # return 0 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:35.372 rmmod nvme_tcp 00:26:35.372 rmmod nvme_fabrics 00:26:35.372 rmmod nvme_keyring 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1665436 ']' 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1665436 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@942 -- # '[' -z 1665436 ']' 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # kill -0 1665436 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@947 -- # uname 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1665436 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # process_name=reactor_4 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' reactor_4 = sudo ']' 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1665436' 00:26:35.372 killing process with pid 1665436 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@961 -- # kill 1665436 00:26:35.372 00:27:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # wait 1665436 00:26:35.372 
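nvmftestfini then unloads the kernel initiator modules and reaps the nvmf app. The killprocess helper traced above boils down to the following sketch (pid taken from this run; the wait assumes the app is a child of the harness shell):

  # module teardown first (the rmmod lines above), then the killprocess pattern
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  pid=1665436
  # refuse to kill anything that resolves to sudo, then terminate and reap
  if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
      kill "$pid" && wait "$pid"
  fi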
00:27:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:35.372 00:27:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:35.372 00:27:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:35.372 00:27:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:35.372 00:27:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:35.372 00:27:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.372 00:27:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:35.372 00:27:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.906 00:27:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:37.906 00:26:37.906 real 0m19.485s 00:26:37.906 user 1m3.491s 00:26:37.906 sys 0m10.477s 00:26:37.906 00:27:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1118 -- # xtrace_disable 00:26:37.906 00:27:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:37.906 ************************************ 00:26:37.906 END TEST nvmf_target_disconnect 00:26:37.906 ************************************ 00:26:37.906 00:27:56 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:26:37.906 00:27:56 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:26:37.906 00:27:56 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:37.906 00:27:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:37.906 00:27:56 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:26:37.906 00:26:37.906 real 20m47.841s 00:26:37.906 user 45m20.658s 00:26:37.906 sys 6m20.909s 00:26:37.906 00:27:56 nvmf_tcp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:26:37.906 00:27:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:37.906 ************************************ 00:26:37.906 END TEST nvmf_tcp 00:26:37.906 ************************************ 00:26:37.906 00:27:56 -- common/autotest_common.sh@1136 -- # return 0 00:26:37.906 00:27:56 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:26:37.906 00:27:56 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:37.906 00:27:56 -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:26:37.906 00:27:56 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:26:37.906 00:27:56 -- common/autotest_common.sh@10 -- # set +x 00:26:37.906 ************************************ 00:26:37.906 START TEST spdkcli_nvmf_tcp 00:26:37.906 ************************************ 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:37.906 * Looking for test storage... 
00:26:37.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1666967 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1666967 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@823 -- # '[' -z 1666967 ']' 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@828 -- # local max_retries=100 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # xtrace_disable 00:26:37.906 00:27:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:37.906 [2024-07-16 00:27:56.479919] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:26:37.906 [2024-07-16 00:27:56.479968] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666967 ] 00:26:37.906 [2024-07-16 00:27:56.534823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:37.906 [2024-07-16 00:27:56.610371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.906 [2024-07-16 00:27:56.610375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.474 00:27:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:26:38.474 00:27:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # return 0 00:26:38.474 00:27:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:38.474 00:27:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:38.474 00:27:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:38.474 00:27:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:38.474 00:27:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:38.474 00:27:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:38.474 00:27:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:38.474 00:27:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:38.474 00:27:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:38.474 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:38.474 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:38.474 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:38.474 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:38.474 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:38.474 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:38.474 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:38.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:38.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:38.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:38.474 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:38.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:38.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses 
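Each spdkcli test boots its own target and blocks until the RPC socket answers before driving it. A sketch of that startup using the binary, core mask and socket from this run (the polling loop stands in for the harness's waitforlisten helper):

  ./build/bin/nvmf_tgt -m 0x3 -p 0 &
  tgt_pid=$!
  # poll /var/tmp/spdk.sock until the app accepts RPCs
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done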
create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:38.474 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:38.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:38.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:38.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:38.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:38.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:38.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:38.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:38.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:38.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:38.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:38.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:38.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:38.474 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:38.474 ' 00:26:41.005 [2024-07-16 00:27:59.697998] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.381 [2024-07-16 00:28:00.873893] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:44.280 [2024-07-16 00:28:03.036545] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:26:46.193 [2024-07-16 00:28:04.894334] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:26:47.567 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:47.567 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:47.567 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:47.567 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:47.567 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:47.567 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:47.567 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:47.567 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:47.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:47.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:47.567 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:47.567 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:47.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:47.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:47.567 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:47.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:47.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:47.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:47.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:47.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:47.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:47.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:47.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:47.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:47.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:47.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:47.567 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:47.567 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:47.825 00:28:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:47.825 00:28:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:47.825 00:28:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:47.825 00:28:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:47.825 00:28:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:47.825 00:28:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:47.825 00:28:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:26:47.825 00:28:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:26:48.083 00:28:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 
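check_match, traced above, re-lists the configured tree and diffs it against a stored expectation. Reconstructed as a sketch from the commands in this trace (the capture into the .test file is implied by the helper rather than visible in the xtrace; the rm appears just below):

  # capture the current /nvmf tree, compare with the .match template, clean up
  ./scripts/spdkcli.py ll /nvmf > test/spdkcli/match_files/spdkcli_nvmf.test
  ./test/app/match/match test/spdkcli/match_files/spdkcli_nvmf.test.match
  rm -f test/spdkcli/match_files/spdkcli_nvmf.test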
00:26:48.083 00:28:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:48.083 00:28:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:48.083 00:28:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:48.083 00:28:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:48.083 00:28:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:48.083 00:28:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:48.083 00:28:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:48.083 00:28:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:48.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:48.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:48.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:48.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:48.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:48.083 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:48.083 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:48.083 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:48.083 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:48.083 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:48.083 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:48.083 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:48.083 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:48.083 ' 00:26:53.355 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:53.356 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:53.356 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:53.356 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:53.356 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:53.356 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:53.356 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:53.356 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:53.356 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:53.356 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:53.356 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:53.356 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 
00:26:53.356 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:53.356 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:53.356 00:28:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:53.356 00:28:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:53.356 00:28:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.356 00:28:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1666967 00:26:53.356 00:28:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@942 -- # '[' -z 1666967 ']' 00:26:53.356 00:28:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # kill -0 1666967 00:26:53.356 00:28:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # uname 00:26:53.356 00:28:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:26:53.356 00:28:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1666967 00:26:53.356 00:28:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:26:53.356 00:28:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:26:53.356 00:28:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1666967' 00:26:53.356 killing process with pid 1666967 00:26:53.356 00:28:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@961 -- # kill 1666967 00:26:53.356 00:28:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # wait 1666967 00:26:53.356 00:28:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:53.356 00:28:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:53.356 00:28:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1666967 ']' 00:26:53.356 00:28:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1666967 00:26:53.356 00:28:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@942 -- # '[' -z 1666967 ']' 00:26:53.356 00:28:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # kill -0 1666967 00:26:53.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (1666967) - No such process 00:26:53.356 00:28:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # echo 'Process with pid 1666967 is not found' 00:26:53.356 Process with pid 1666967 is not found 00:26:53.356 00:28:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:53.356 00:28:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:53.356 00:28:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:53.356 00:26:53.356 real 0m15.798s 00:26:53.356 user 0m32.778s 00:26:53.356 sys 0m0.677s 00:26:53.356 00:28:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:26:53.356 00:28:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.356 ************************************ 00:26:53.356 END TEST spdkcli_nvmf_tcp 00:26:53.356 ************************************ 00:26:53.356 00:28:12 -- common/autotest_common.sh@1136 -- # return 0 00:26:53.356 00:28:12 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:53.356 00:28:12 -- common/autotest_common.sh@1093 -- # 
'[' 3 -le 1 ']' 00:26:53.356 00:28:12 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:26:53.356 00:28:12 -- common/autotest_common.sh@10 -- # set +x 00:26:53.356 ************************************ 00:26:53.356 START TEST nvmf_identify_passthru 00:26:53.356 ************************************ 00:26:53.356 00:28:12 nvmf_identify_passthru -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:53.614 * Looking for test storage... 00:26:53.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:53.614 00:28:12 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:53.614 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:26:53.614 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.614 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.614 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.614 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.614 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.614 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.615 00:28:12 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.615 00:28:12 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.615 00:28:12 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.615 00:28:12 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.615 00:28:12 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.615 00:28:12 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.615 00:28:12 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:53.615 00:28:12 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:53.615 00:28:12 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.615 00:28:12 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.615 00:28:12 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.615 00:28:12 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.615 00:28:12 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.615 00:28:12 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.615 00:28:12 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.615 00:28:12 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:53.615 00:28:12 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.615 00:28:12 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.615 00:28:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:53.615 00:28:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:53.615 00:28:12 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:26:53.615 00:28:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
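[Note] The `e810`, `x722`, and `mlx` arrays being declared here are keyed by PCI vendor:device IDs so the test can pick supported NICs. The script itself reads a `pci_bus_cache` built elsewhere in nvmf/common.sh, so the lspci scan below is only an illustration of the same lookup, not the helper's code:

```bash
# Intel E810 (ice) is 0x1592/0x159b, Intel X722 (i40e) is 0x37d2; these are
# the IDs the arrays above are populated from.
ids=(8086:1592 8086:159b 8086:37d2)
for id in "${ids[@]}"; do
    lspci -Dnn -d "$id"   # -D prints domain:bus:dev.func, -nn the numeric IDs
done
```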
00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:58.957 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:58.957 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:58.957 00:28:17 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:58.957 Found net devices under 0000:86:00.0: cvl_0_0 00:26:58.957 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:58.958 Found net devices under 0000:86:00.1: cvl_0_1 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:58.958 00:28:17 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:58.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:58.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:26:58.958 00:26:58.958 --- 10.0.0.2 ping statistics --- 00:26:58.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.958 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:58.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:26:58.958 00:26:58.958 --- 10.0.0.1 ping statistics --- 00:26:58.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.958 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:58.958 00:28:17 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:58.958 00:28:17 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:58.958 00:28:17 nvmf_identify_passthru -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:58.958 00:28:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:58.958 00:28:17 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:58.958 00:28:17 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # bdfs=() 00:26:58.958 00:28:17 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # local bdfs 00:26:58.958 00:28:17 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:26:58.958 00:28:17 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:26:58.958 00:28:17 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:26:58.958 00:28:17 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:26:58.958 00:28:17 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:58.958 00:28:17 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:58.958 00:28:17 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # jq -r '.config[].params.traddr' 00:26:58.958 00:28:17 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # (( 1 == 0 )) 00:26:58.958 00:28:17 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # printf '%s\n' 0000:5e:00.0 00:26:58.958 00:28:17 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # echo 0000:5e:00.0 00:26:58.958 00:28:17 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:26:58.958 00:28:17 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:26:58.958 00:28:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:26:58.958 00:28:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:58.958 00:28:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:03.148 00:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:27:03.148 00:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:03.148 00:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:03.148 00:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:07.336 00:28:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:07.336 00:28:25 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:07.336 00:28:25 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:07.336 00:28:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:07.336 00:28:25 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:07.336 00:28:25 nvmf_identify_passthru -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:07.336 00:28:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:07.336 00:28:25 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1673977 00:27:07.336 00:28:25 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:07.336 00:28:25 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:07.336 00:28:25 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1673977 00:27:07.336 00:28:25 nvmf_identify_passthru -- common/autotest_common.sh@823 -- # '[' -z 1673977 ']' 00:27:07.336 00:28:25 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.336 00:28:25 nvmf_identify_passthru -- common/autotest_common.sh@828 -- # local max_retries=100 00:27:07.336 00:28:25 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.336 00:28:25 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # xtrace_disable 00:27:07.336 00:28:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:07.336 [2024-07-16 00:28:25.774438] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:27:07.336 [2024-07-16 00:28:25.774486] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.336 [2024-07-16 00:28:25.830914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:07.336 [2024-07-16 00:28:25.910121] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.336 [2024-07-16 00:28:25.910159] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
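[Note] The target launch above starts nvmf_tgt inside the test namespace with --wait-for-rpc, so identify passthru can be enabled before initialization finishes. A loose sketch of that sequence, with the flags and namespace name taken from the log; the polling loop stands in for the harness's waitforlisten helper and is an assumption about its behavior:

```bash
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start the target in the test namespace, paused until RPC configuration.
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!

# Wait for the RPC socket, enable identify passthru, then finish startup;
# subsystem init happens inside framework_start_init, as the log shows.
until "$spdk/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
"$spdk/scripts/rpc.py" nvmf_set_config --passthru-identify-ctrlr
"$spdk/scripts/rpc.py" framework_start_init
```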
00:27:07.336 [2024-07-16 00:28:25.910166] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.336 [2024-07-16 00:28:25.910172] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.336 [2024-07-16 00:28:25.910177] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:07.336 [2024-07-16 00:28:25.910246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.336 [2024-07-16 00:28:25.910265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.336 [2024-07-16 00:28:25.910351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.336 [2024-07-16 00:28:25.910352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.904 00:28:26 nvmf_identify_passthru -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:27:07.904 00:28:26 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # return 0 00:27:07.904 00:28:26 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:07.904 00:28:26 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:07.904 00:28:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:07.904 INFO: Log level set to 20 00:27:07.904 INFO: Requests: 00:27:07.904 { 00:27:07.904 "jsonrpc": "2.0", 00:27:07.904 "method": "nvmf_set_config", 00:27:07.904 "id": 1, 00:27:07.904 "params": { 00:27:07.904 "admin_cmd_passthru": { 00:27:07.904 "identify_ctrlr": true 00:27:07.904 } 00:27:07.904 } 00:27:07.904 } 00:27:07.904 00:27:07.904 INFO: response: 00:27:07.904 { 00:27:07.904 "jsonrpc": "2.0", 00:27:07.904 "id": 1, 00:27:07.904 "result": true 00:27:07.904 } 00:27:07.904 00:27:07.904 00:28:26 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:07.904 00:28:26 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:07.904 00:28:26 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:07.904 00:28:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:07.904 INFO: Setting log level to 20 00:27:07.904 INFO: Setting log level to 20 00:27:07.904 INFO: Log level set to 20 00:27:07.904 INFO: Log level set to 20 00:27:07.904 INFO: Requests: 00:27:07.904 { 00:27:07.904 "jsonrpc": "2.0", 00:27:07.904 "method": "framework_start_init", 00:27:07.904 "id": 1 00:27:07.904 } 00:27:07.904 00:27:07.904 INFO: Requests: 00:27:07.904 { 00:27:07.904 "jsonrpc": "2.0", 00:27:07.904 "method": "framework_start_init", 00:27:07.904 "id": 1 00:27:07.904 } 00:27:07.904 00:27:07.904 [2024-07-16 00:28:26.668134] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:07.904 INFO: response: 00:27:07.904 { 00:27:07.904 "jsonrpc": "2.0", 00:27:07.904 "id": 1, 00:27:07.904 "result": true 00:27:07.904 } 00:27:07.904 00:27:07.904 INFO: response: 00:27:07.904 { 00:27:07.904 "jsonrpc": "2.0", 00:27:07.904 "id": 1, 00:27:07.904 "result": true 00:27:07.904 } 00:27:07.904 00:27:07.904 00:28:26 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:07.904 00:28:26 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:07.904 00:28:26 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:07.904 00:28:26 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:27:07.904 INFO: Setting log level to 40 00:27:07.904 INFO: Setting log level to 40 00:27:07.904 INFO: Setting log level to 40 00:27:07.904 [2024-07-16 00:28:26.681681] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.904 00:28:26 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:07.904 00:28:26 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:07.904 00:28:26 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:07.904 00:28:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:07.904 00:28:26 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:27:07.904 00:28:26 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:07.904 00:28:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:11.189 Nvme0n1 00:27:11.189 00:28:29 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:11.189 00:28:29 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:11.189 00:28:29 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:11.189 00:28:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:11.189 00:28:29 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:11.189 00:28:29 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:11.189 00:28:29 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:11.189 00:28:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:11.189 00:28:29 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:11.189 00:28:29 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:11.189 00:28:29 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:11.189 00:28:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:11.189 [2024-07-16 00:28:29.578464] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.189 00:28:29 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:11.189 00:28:29 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:11.189 00:28:29 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:11.189 00:28:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:11.189 [ 00:27:11.189 { 00:27:11.189 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:11.189 "subtype": "Discovery", 00:27:11.189 "listen_addresses": [], 00:27:11.189 "allow_any_host": true, 00:27:11.189 "hosts": [] 00:27:11.189 }, 00:27:11.189 { 00:27:11.189 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:11.189 "subtype": "NVMe", 00:27:11.189 "listen_addresses": [ 00:27:11.189 { 00:27:11.189 "trtype": "TCP", 00:27:11.189 "adrfam": "IPv4", 00:27:11.189 "traddr": "10.0.0.2", 00:27:11.189 "trsvcid": "4420" 00:27:11.189 } 00:27:11.189 ], 00:27:11.189 "allow_any_host": true, 00:27:11.189 "hosts": [], 00:27:11.189 "serial_number": 
"SPDK00000000000001", 00:27:11.189 "model_number": "SPDK bdev Controller", 00:27:11.189 "max_namespaces": 1, 00:27:11.189 "min_cntlid": 1, 00:27:11.189 "max_cntlid": 65519, 00:27:11.189 "namespaces": [ 00:27:11.189 { 00:27:11.189 "nsid": 1, 00:27:11.189 "bdev_name": "Nvme0n1", 00:27:11.189 "name": "Nvme0n1", 00:27:11.189 "nguid": "9925A47D2E7A492393B6018A01EC999C", 00:27:11.189 "uuid": "9925a47d-2e7a-4923-93b6-018a01ec999c" 00:27:11.189 } 00:27:11.189 ] 00:27:11.189 } 00:27:11.189 ] 00:27:11.189 00:28:29 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:11.189 00:28:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:11.189 00:28:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:11.189 00:28:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:11.189 00:28:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:27:11.189 00:28:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:11.189 00:28:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:11.189 00:28:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:11.189 00:28:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:11.189 00:28:29 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:27:11.189 00:28:29 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:11.189 00:28:29 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:11.189 00:28:29 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:11.189 00:28:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:11.189 00:28:29 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:11.189 00:28:29 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:11.189 00:28:29 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:11.189 00:28:29 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:11.189 00:28:29 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:11.189 00:28:29 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:11.189 00:28:29 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:27:11.189 00:28:29 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:11.189 00:28:29 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:11.189 rmmod nvme_tcp 00:27:11.189 rmmod nvme_fabrics 00:27:11.189 rmmod nvme_keyring 00:27:11.189 00:28:29 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:11.189 00:28:30 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:11.189 00:28:30 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:11.189 00:28:30 nvmf_identify_passthru -- nvmf/common.sh@489 -- # 
'[' -n 1673977 ']' 00:27:11.189 00:28:30 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1673977 00:27:11.189 00:28:30 nvmf_identify_passthru -- common/autotest_common.sh@942 -- # '[' -z 1673977 ']' 00:27:11.189 00:28:30 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # kill -0 1673977 00:27:11.189 00:28:30 nvmf_identify_passthru -- common/autotest_common.sh@947 -- # uname 00:27:11.189 00:28:30 nvmf_identify_passthru -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:27:11.189 00:28:30 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1673977 00:27:11.448 00:28:30 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:27:11.448 00:28:30 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:27:11.448 00:28:30 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1673977' 00:27:11.448 killing process with pid 1673977 00:27:11.448 00:28:30 nvmf_identify_passthru -- common/autotest_common.sh@961 -- # kill 1673977 00:27:11.448 00:28:30 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # wait 1673977 00:27:12.824 00:28:31 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:12.824 00:28:31 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:12.824 00:28:31 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:12.824 00:28:31 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:12.824 00:28:31 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:12.824 00:28:31 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.824 00:28:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:12.824 00:28:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.727 00:28:33 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:14.727 00:27:14.727 real 0m21.384s 00:27:14.727 user 0m29.661s 00:27:14.727 sys 0m4.563s 00:27:14.727 00:28:33 nvmf_identify_passthru -- common/autotest_common.sh@1118 -- # xtrace_disable 00:27:14.727 00:28:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:14.727 ************************************ 00:27:14.727 END TEST nvmf_identify_passthru 00:27:14.727 ************************************ 00:27:14.985 00:28:33 -- common/autotest_common.sh@1136 -- # return 0 00:27:14.985 00:28:33 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:14.985 00:28:33 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:27:14.985 00:28:33 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:27:14.985 00:28:33 -- common/autotest_common.sh@10 -- # set +x 00:27:14.985 ************************************ 00:27:14.985 START TEST nvmf_dif 00:27:14.985 ************************************ 00:27:14.985 00:28:33 nvmf_dif -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:14.985 * Looking for test storage... 
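[Note] Each suite here runs through the harness's run_test wrapper, which produces the START/END banners and the real/user/sys timing seen above. A loose reconstruction of the pattern; the banner text matches the log, but the body is an assumption, not autotest_common.sh verbatim:

```bash
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # suite body; bash's time prints real/user/sys
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}
```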
00:27:14.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:14.985 00:28:33 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:14.985 00:28:33 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.985 00:28:33 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.985 00:28:33 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.985 00:28:33 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.985 00:28:33 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.985 00:28:33 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.985 00:28:33 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:27:14.985 00:28:33 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:14.985 00:28:33 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:14.985 00:28:33 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:14.985 00:28:33 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:14.985 00:28:33 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:14.986 00:28:33 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:14.986 00:28:33 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:14.986 00:28:33 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:14.986 00:28:33 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.986 00:28:33 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:14.986 00:28:33 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:14.986 00:28:33 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:14.986 00:28:33 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.986 00:28:33 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:14.986 00:28:33 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.986 00:28:33 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:14.986 00:28:33 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:14.986 00:28:33 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:27:14.986 00:28:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:20.253 00:28:38 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:20.254 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:20.254 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
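[Note] The `for net_dev` loop being traced here walks the kernel's sysfs view of each NIC: the 'Found net devices under ...' messages come from globbing /sys/bus/pci/devices/&lt;bdf&gt;/net/. A standalone equivalent, with the two port addresses taken from the log:

```bash
# Every PCI network function lists its netdev names under
# /sys/bus/pci/devices/<bdf>/net/.
for pci in 0000:86:00.0 0000:86:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
    done
done
```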
00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:20.254 Found net devices under 0000:86:00.0: cvl_0_0 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:20.254 Found net devices under 0000:86:00.1: cvl_0_1 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:20.254 00:28:38 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:20.254 00:28:39 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:20.254 00:28:39 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:20.254 00:28:39 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:20.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:20.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:27:20.254 00:27:20.254 --- 10.0.0.2 ping statistics --- 00:27:20.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.254 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:27:20.254 00:28:39 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:20.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:20.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:27:20.254 00:27:20.254 --- 10.0.0.1 ping statistics --- 00:27:20.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.254 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:27:20.254 00:28:39 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:20.254 00:28:39 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:27:20.254 00:28:39 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:20.254 00:28:39 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:22.821 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:22.821 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:22.821 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:22.821 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:22.821 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:22.821 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:22.821 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:22.821 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:22.821 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:22.821 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:22.821 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:22.821 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:22.821 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:22.821 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:22.821 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:22.821 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:22.821 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:22.821 00:28:41 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:22.821 00:28:41 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:22.821 00:28:41 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:22.821 00:28:41 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:22.821 00:28:41 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:22.821 00:28:41 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:22.821 00:28:41 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:22.821 00:28:41 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:22.821 00:28:41 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:22.821 00:28:41 nvmf_dif -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:22.821 00:28:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:22.821 00:28:41 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1679441 00:27:22.821 00:28:41 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1679441 00:27:22.821 00:28:41 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:22.821 00:28:41 nvmf_dif -- common/autotest_common.sh@823 -- # '[' -z 1679441 ']' 00:27:22.821 00:28:41 nvmf_dif -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.821 00:28:41 nvmf_dif -- common/autotest_common.sh@828 -- # local max_retries=100 00:27:22.821 00:28:41 nvmf_dif -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.821 00:28:41 nvmf_dif -- common/autotest_common.sh@832 -- # xtrace_disable 00:27:22.821 00:28:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:22.821 [2024-07-16 00:28:41.648097] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:27:22.821 [2024-07-16 00:28:41.648134] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:23.080 [2024-07-16 00:28:41.704401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.080 [2024-07-16 00:28:41.782512] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:23.080 [2024-07-16 00:28:41.782555] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:23.080 [2024-07-16 00:28:41.782562] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:23.080 [2024-07-16 00:28:41.782568] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:23.080 [2024-07-16 00:28:41.782573] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
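The block above builds the two-endpoint TCP test bed and starts the target: cvl_0_0 is moved into a private network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), pings verify both directions, and nvmf_tgt is launched inside the namespace with waitforlisten blocking until its RPC socket answers. A condensed sketch of the same bring-up, with paths shortened relative to the SPDK tree; the polling loop is only a stand-in for the real waitforlisten helper in autotest_common.sh, whose details differ:

    # Target NIC in its own namespace; initiator NIC stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Start the target inside the namespace; the UNIX RPC socket is a
    # filesystem socket, so readiness can be polled from the root namespace.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done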
00:27:23.080 [2024-07-16 00:28:41.782591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.646 00:28:42 nvmf_dif -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:27:23.646 00:28:42 nvmf_dif -- common/autotest_common.sh@856 -- # return 0 00:27:23.646 00:28:42 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:23.646 00:28:42 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:23.646 00:28:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:23.646 00:28:42 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:23.646 00:28:42 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:23.646 00:28:42 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:23.646 00:28:42 nvmf_dif -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:23.646 00:28:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:23.646 [2024-07-16 00:28:42.484379] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:23.646 00:28:42 nvmf_dif -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:23.646 00:28:42 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:23.646 00:28:42 nvmf_dif -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:27:23.646 00:28:42 nvmf_dif -- common/autotest_common.sh@1099 -- # xtrace_disable 00:27:23.646 00:28:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:23.904 ************************************ 00:27:23.904 START TEST fio_dif_1_default 00:27:23.904 ************************************ 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1117 -- # fio_dif_1 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:23.904 bdev_null0 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:23.904 [2024-07-16 00:28:42.544639] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local sanitizers 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # shift 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local asan_lib= 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:23.904 { 00:27:23.904 "params": { 00:27:23.904 "name": "Nvme$subsystem", 00:27:23.904 "trtype": "$TEST_TRANSPORT", 00:27:23.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:23.904 "adrfam": "ipv4", 00:27:23.904 "trsvcid": "$NVMF_PORT", 00:27:23.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:23.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:23.904 "hdgst": ${hdgst:-false}, 00:27:23.904 "ddgst": ${ddgst:-false} 00:27:23.904 }, 00:27:23.904 "method": "bdev_nvme_attach_controller" 00:27:23.904 } 00:27:23.904 EOF 00:27:23.904 )") 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1339 -- # grep libasan 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:23.904 00:28:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:23.905 "params": { 00:27:23.905 "name": "Nvme0", 00:27:23.905 "trtype": "tcp", 00:27:23.905 "traddr": "10.0.0.2", 00:27:23.905 "adrfam": "ipv4", 00:27:23.905 "trsvcid": "4420", 00:27:23.905 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:23.905 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:23.905 "hdgst": false, 00:27:23.905 "ddgst": false 00:27:23.905 }, 00:27:23.905 "method": "bdev_nvme_attach_controller" 00:27:23.905 }' 00:27:23.905 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # asan_lib= 00:27:23.905 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:27:23.905 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:27:23.905 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:23.905 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:27:23.905 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:27:23.905 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # asan_lib= 00:27:23.905 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:27:23.905 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:23.905 00:28:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:24.163 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:24.163 fio-3.35 00:27:24.163 Starting 1 thread 00:27:36.382 00:27:36.382 filename0: (groupid=0, jobs=1): err= 0: pid=1679820: Tue Jul 16 00:28:53 2024 00:27:36.382 read: IOPS=165, BW=663KiB/s (679kB/s)(6640KiB/10013msec) 00:27:36.382 slat (nsec): min=5906, max=26101, avg=6210.86, stdev=933.11 00:27:36.382 clat (usec): min=432, max=43319, avg=24109.36, stdev=20063.88 00:27:36.382 lat (usec): min=438, max=43345, avg=24115.58, stdev=20063.90 00:27:36.382 clat percentiles (usec): 00:27:36.382 | 1.00th=[ 437], 5.00th=[ 445], 10.00th=[ 453], 20.00th=[ 676], 00:27:36.382 | 30.00th=[ 701], 40.00th=[ 816], 50.00th=[41157], 60.00th=[41157], 00:27:36.382 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:27:36.382 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:27:36.382 | 99.99th=[43254] 00:27:36.382 bw ( KiB/s): min= 480, max= 768, per=99.83%, avg=662.40, stdev=95.21, samples=20 00:27:36.382 iops : min= 120, max= 192, avg=165.60, stdev=23.80, samples=20 00:27:36.382 lat (usec) : 500=16.14%, 750=16.08%, 1000=9.94% 00:27:36.382 
lat (msec) : 50=57.83% 00:27:36.382 cpu : usr=94.05%, sys=5.70%, ctx=11, majf=0, minf=223 00:27:36.382 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:36.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.382 issued rwts: total=1660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.382 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:36.382 00:27:36.382 Run status group 0 (all jobs): 00:27:36.382 READ: bw=663KiB/s (679kB/s), 663KiB/s-663KiB/s (679kB/s-679kB/s), io=6640KiB (6799kB), run=10013-10013msec 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:36.382 00:27:36.382 real 0m11.129s 00:27:36.382 user 0m16.139s 00:27:36.382 sys 0m0.865s 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1118 -- # xtrace_disable 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:36.382 ************************************ 00:27:36.382 END TEST fio_dif_1_default 00:27:36.382 ************************************ 00:27:36.382 00:28:53 nvmf_dif -- common/autotest_common.sh@1136 -- # return 0 00:27:36.382 00:28:53 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:36.382 00:28:53 nvmf_dif -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:27:36.382 00:28:53 nvmf_dif -- common/autotest_common.sh@1099 -- # xtrace_disable 00:27:36.382 00:28:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:36.382 ************************************ 00:27:36.382 START TEST fio_dif_1_multi_subsystems 00:27:36.382 ************************************ 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1117 -- # fio_dif_1_multi_subsystems 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:36.382 00:28:53 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.382 bdev_null0 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.382 [2024-07-16 00:28:53.736253] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.382 bdev_null1 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local sanitizers 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.382 { 00:27:36.382 "params": { 00:27:36.382 "name": "Nvme$subsystem", 00:27:36.382 "trtype": "$TEST_TRANSPORT", 00:27:36.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.382 "adrfam": "ipv4", 00:27:36.382 "trsvcid": "$NVMF_PORT", 00:27:36.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.382 "hdgst": ${hdgst:-false}, 00:27:36.382 "ddgst": ${ddgst:-false} 00:27:36.382 }, 00:27:36.382 "method": "bdev_nvme_attach_controller" 00:27:36.382 } 00:27:36.382 EOF 00:27:36.382 )") 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # shift 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local asan_lib= 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:36.382 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # grep libasan 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.383 { 00:27:36.383 "params": { 00:27:36.383 "name": "Nvme$subsystem", 00:27:36.383 "trtype": "$TEST_TRANSPORT", 00:27:36.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.383 "adrfam": "ipv4", 00:27:36.383 "trsvcid": "$NVMF_PORT", 00:27:36.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.383 "hdgst": ${hdgst:-false}, 00:27:36.383 "ddgst": ${ddgst:-false} 00:27:36.383 }, 00:27:36.383 "method": "bdev_nvme_attach_controller" 00:27:36.383 } 00:27:36.383 EOF 00:27:36.383 )") 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
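The ldd/grep/awk/LD_PRELOAD sequence (run once per fio invocation, as at the end of the first test above and again here) is how the external SPDK ioengine gets loaded: if build/fio/spdk_bdev links a sanitizer runtime, that library must be preloaded ahead of the plugin, and the bdev configuration arrives as JSON on a file descriptor. Reduced to its essentials; bdev.json and jobfile.fio are stand-ins for the /dev/fd/62 and /dev/fd/61 descriptors that fio_bdev wires up:

    # Preload the SPDK fio plugin (after libasan, when the build links it) and
    # point the spdk_bdev ioengine at a JSON bdev configuration.
    asan_lib=$(ldd ./build/fio/spdk_bdev | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib ./build/fio/spdk_bdev" \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json jobfile.fio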
00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:36.383 "params": { 00:27:36.383 "name": "Nvme0", 00:27:36.383 "trtype": "tcp", 00:27:36.383 "traddr": "10.0.0.2", 00:27:36.383 "adrfam": "ipv4", 00:27:36.383 "trsvcid": "4420", 00:27:36.383 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:36.383 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:36.383 "hdgst": false, 00:27:36.383 "ddgst": false 00:27:36.383 }, 00:27:36.383 "method": "bdev_nvme_attach_controller" 00:27:36.383 },{ 00:27:36.383 "params": { 00:27:36.383 "name": "Nvme1", 00:27:36.383 "trtype": "tcp", 00:27:36.383 "traddr": "10.0.0.2", 00:27:36.383 "adrfam": "ipv4", 00:27:36.383 "trsvcid": "4420", 00:27:36.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:36.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:36.383 "hdgst": false, 00:27:36.383 "ddgst": false 00:27:36.383 }, 00:27:36.383 "method": "bdev_nvme_attach_controller" 00:27:36.383 }' 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # asan_lib= 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # asan_lib= 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:36.383 00:28:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:36.383 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:36.383 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:36.383 fio-3.35 00:27:36.383 Starting 2 threads 00:27:46.350 00:27:46.350 filename0: (groupid=0, jobs=1): err= 0: pid=1681788: Tue Jul 16 00:29:04 2024 00:27:46.350 read: IOPS=96, BW=386KiB/s (396kB/s)(3872KiB/10024msec) 00:27:46.350 slat (nsec): min=6183, max=66635, avg=8200.54, stdev=3562.71 00:27:46.350 clat (usec): min=40826, max=42944, avg=41394.44, stdev=515.84 00:27:46.350 lat (usec): min=40833, max=42956, avg=41402.64, stdev=516.16 00:27:46.350 clat percentiles (usec): 00:27:46.350 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:27:46.350 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:27:46.350 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:46.350 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:27:46.350 | 99.99th=[42730] 00:27:46.350 bw ( KiB/s): min= 384, max= 416, per=49.85%, avg=385.60, 
stdev= 7.16, samples=20 00:27:46.350 iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20 00:27:46.350 lat (msec) : 50=100.00% 00:27:46.350 cpu : usr=97.79%, sys=1.93%, ctx=13, majf=0, minf=163 00:27:46.350 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:46.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:46.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:46.351 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:46.351 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:46.351 filename1: (groupid=0, jobs=1): err= 0: pid=1681789: Tue Jul 16 00:29:04 2024 00:27:46.351 read: IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10027msec) 00:27:46.351 slat (nsec): min=6192, max=31941, avg=8161.91, stdev=3069.97 00:27:46.351 clat (usec): min=40815, max=42090, avg=41405.68, stdev=491.88 00:27:46.351 lat (usec): min=40821, max=42103, avg=41413.85, stdev=492.01 00:27:46.351 clat percentiles (usec): 00:27:46.351 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:27:46.351 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:27:46.351 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:46.351 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:27:46.351 | 99.99th=[42206] 00:27:46.351 bw ( KiB/s): min= 352, max= 416, per=49.85%, avg=385.60, stdev=12.61, samples=20 00:27:46.351 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:27:46.351 lat (msec) : 50=100.00% 00:27:46.351 cpu : usr=98.00%, sys=1.71%, ctx=11, majf=0, minf=143 00:27:46.351 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:46.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:46.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:46.351 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:46.351 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:46.351 00:27:46.351 Run status group 0 (all jobs): 00:27:46.351 READ: bw=772KiB/s (791kB/s), 386KiB/s-386KiB/s (395kB/s-396kB/s), io=7744KiB (7930kB), run=10024-10027msec 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:46.351 00:29:05 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:46.351 00:27:46.351 real 0m11.448s 00:27:46.351 user 0m27.017s 00:27:46.351 sys 0m0.666s 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1118 -- # xtrace_disable 00:27:46.351 00:29:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:46.351 ************************************ 00:27:46.351 END TEST fio_dif_1_multi_subsystems 00:27:46.351 ************************************ 00:27:46.351 00:29:05 nvmf_dif -- common/autotest_common.sh@1136 -- # return 0 00:27:46.351 00:29:05 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:46.351 00:29:05 nvmf_dif -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:27:46.351 00:29:05 nvmf_dif -- common/autotest_common.sh@1099 -- # xtrace_disable 00:27:46.351 00:29:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:46.608 ************************************ 00:27:46.608 START TEST fio_dif_rand_params 00:27:46.608 ************************************ 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1117 -- # fio_dif_rand_params 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:46.608 
00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:46.608 bdev_null0 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:46.608 [2024-07-16 00:29:05.250621] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local sanitizers 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # shift 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local 
asan_lib= 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.608 { 00:27:46.608 "params": { 00:27:46.608 "name": "Nvme$subsystem", 00:27:46.608 "trtype": "$TEST_TRANSPORT", 00:27:46.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.608 "adrfam": "ipv4", 00:27:46.608 "trsvcid": "$NVMF_PORT", 00:27:46.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.608 "hdgst": ${hdgst:-false}, 00:27:46.608 "ddgst": ${ddgst:-false} 00:27:46.608 }, 00:27:46.608 "method": "bdev_nvme_attach_controller" 00:27:46.608 } 00:27:46.608 EOF 00:27:46.608 )") 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libasan 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
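For reference, the jq/IFS/printf steps around this point assemble the per-subsystem fragments shown earlier into one document for --spdk_json_conf. Its overall shape is roughly the sketch below, reconstructed from the fragments printed in this log; the real gen_nvmf_target_json may differ in detail (for example a trailing bdev_wait_for_examine entry in the config array):

    # Approximate shape of the generated config (single-subsystem case).
    cat <<- 'JSON' | jq .
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    JSON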
00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:46.608 00:29:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:46.609 "params": { 00:27:46.609 "name": "Nvme0", 00:27:46.609 "trtype": "tcp", 00:27:46.609 "traddr": "10.0.0.2", 00:27:46.609 "adrfam": "ipv4", 00:27:46.609 "trsvcid": "4420", 00:27:46.609 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:46.609 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:46.609 "hdgst": false, 00:27:46.609 "ddgst": false 00:27:46.609 }, 00:27:46.609 "method": "bdev_nvme_attach_controller" 00:27:46.609 }' 00:27:46.609 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:27:46.609 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:27:46.609 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:27:46.609 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:46.609 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:27:46.609 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:27:46.609 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:27:46.609 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:27:46.609 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:46.609 00:29:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:46.866 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:46.866 ... 
00:27:46.866 fio-3.35 00:27:46.866 Starting 3 threads 00:27:53.422 00:27:53.422 filename0: (groupid=0, jobs=1): err= 0: pid=1683747: Tue Jul 16 00:29:11 2024 00:27:53.422 read: IOPS=254, BW=31.8MiB/s (33.3MB/s)(160MiB/5046msec) 00:27:53.422 slat (nsec): min=6245, max=45656, avg=9429.89, stdev=2712.35 00:27:53.422 clat (usec): min=4061, max=90562, avg=11759.60, stdev=13377.02 00:27:53.422 lat (usec): min=4068, max=90574, avg=11769.03, stdev=13377.34 00:27:53.422 clat percentiles (usec): 00:27:53.422 | 1.00th=[ 4293], 5.00th=[ 4621], 10.00th=[ 4948], 20.00th=[ 5669], 00:27:53.422 | 30.00th=[ 6325], 40.00th=[ 6652], 50.00th=[ 7177], 60.00th=[ 7701], 00:27:53.422 | 70.00th=[ 8586], 80.00th=[ 9634], 90.00th=[47449], 95.00th=[49021], 00:27:53.422 | 99.00th=[50594], 99.50th=[51643], 99.90th=[53216], 99.95th=[90702], 00:27:53.422 | 99.99th=[90702] 00:27:53.422 bw ( KiB/s): min=23040, max=39168, per=32.88%, avg=32762.20, stdev=5739.13, samples=10 00:27:53.422 iops : min= 180, max= 306, avg=255.90, stdev=44.88, samples=10 00:27:53.422 lat (msec) : 10=82.92%, 20=6.01%, 50=9.20%, 100=1.87% 00:27:53.422 cpu : usr=96.43%, sys=3.21%, ctx=12, majf=0, minf=58 00:27:53.422 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:53.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.422 issued rwts: total=1282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:53.422 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:53.422 filename0: (groupid=0, jobs=1): err= 0: pid=1683748: Tue Jul 16 00:29:11 2024 00:27:53.422 read: IOPS=257, BW=32.2MiB/s (33.8MB/s)(163MiB/5043msec) 00:27:53.422 slat (nsec): min=6224, max=56147, avg=9338.90, stdev=2933.06 00:27:53.422 clat (usec): min=3875, max=91060, avg=11624.40, stdev=13571.86 00:27:53.422 lat (usec): min=3882, max=91070, avg=11633.74, stdev=13571.96 00:27:53.422 clat percentiles (usec): 00:27:53.422 | 1.00th=[ 4080], 5.00th=[ 4424], 10.00th=[ 4686], 20.00th=[ 5342], 00:27:53.422 | 30.00th=[ 6194], 40.00th=[ 6718], 50.00th=[ 7177], 60.00th=[ 7635], 00:27:53.422 | 70.00th=[ 8586], 80.00th=[ 9634], 90.00th=[47449], 95.00th=[49546], 00:27:53.422 | 99.00th=[51119], 99.50th=[51643], 99.90th=[90702], 99.95th=[90702], 00:27:53.422 | 99.99th=[90702] 00:27:53.422 bw ( KiB/s): min=21504, max=48542, per=33.31%, avg=33193.40, stdev=7532.60, samples=10 00:27:53.422 iops : min= 168, max= 379, avg=259.30, stdev=58.80, samples=10 00:27:53.422 lat (msec) : 4=0.31%, 10=81.62%, 20=7.46%, 50=7.15%, 100=3.46% 00:27:53.422 cpu : usr=95.72%, sys=3.93%, ctx=11, majf=0, minf=191 00:27:53.422 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:53.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.422 issued rwts: total=1300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:53.422 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:53.422 filename0: (groupid=0, jobs=1): err= 0: pid=1683749: Tue Jul 16 00:29:11 2024 00:27:53.422 read: IOPS=266, BW=33.3MiB/s (35.0MB/s)(168MiB/5045msec) 00:27:53.422 slat (nsec): min=6251, max=25723, avg=9392.10, stdev=2580.22 00:27:53.422 clat (usec): min=3962, max=94484, avg=11199.52, stdev=12849.20 00:27:53.422 lat (usec): min=3969, max=94494, avg=11208.91, stdev=12849.56 00:27:53.422 clat percentiles (usec): 00:27:53.422 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 
4817], 20.00th=[ 5473], 00:27:53.422 | 30.00th=[ 6325], 40.00th=[ 6718], 50.00th=[ 7046], 60.00th=[ 7504], 00:27:53.422 | 70.00th=[ 8455], 80.00th=[ 9503], 90.00th=[11994], 95.00th=[49021], 00:27:53.422 | 99.00th=[52167], 99.50th=[52167], 99.90th=[89654], 99.95th=[94897], 00:27:53.422 | 99.99th=[94897] 00:27:53.422 bw ( KiB/s): min=23040, max=42752, per=34.53%, avg=34406.10, stdev=5444.15, samples=10 00:27:53.422 iops : min= 180, max= 334, avg=268.70, stdev=42.52, samples=10 00:27:53.422 lat (msec) : 4=0.07%, 10=83.80%, 20=6.54%, 50=6.69%, 100=2.90% 00:27:53.422 cpu : usr=95.46%, sys=4.20%, ctx=13, majf=0, minf=107 00:27:53.422 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:53.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.422 issued rwts: total=1346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:53.422 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:53.422 00:27:53.422 Run status group 0 (all jobs): 00:27:53.422 READ: bw=97.3MiB/s (102MB/s), 31.8MiB/s-33.3MiB/s (33.3MB/s-35.0MB/s), io=491MiB (515MB), run=5043-5046msec 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.422 bdev_null0 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.422 [2024-07-16 00:29:11.359149] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.422 bdev_null1 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:53.422 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.423 00:29:11 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.423 bdev_null2 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 
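
[editor's note] The create_subsystems 0 1 2 sequence above reduces to four RPCs per subsystem: a DIF-type-2 null bdev, an NVMe-oF subsystem, a namespace, and a TCP listener. As a standalone sketch — assuming a running SPDK nvmf/tcp target on the default RPC socket and scripts/rpc.py from the SPDK tree; the RPC names and arguments are verbatim from the log:

#!/usr/bin/env bash
# Sketch of the per-subsystem setup driven by create_subsystems 0 1 2 above.
# Assumes a running SPDK target reachable on the default RPC socket and
# scripts/rpc.py from the SPDK tree; all command names/args come from the log.
RPC=./scripts/rpc.py
for sub in 0 1 2; do
  # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF protection type 2
  "$RPC" bdev_null_create "bdev_null${sub}" 64 512 --md-size 16 --dif-type 2
  "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub}" \
         --serial-number "53313233-${sub}" --allow-any-host
  "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub}" "bdev_null${sub}"
  "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub}" \
         -t tcp -a 10.0.0.2 -s 4420
done
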
00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.423 { 00:27:53.423 "params": { 00:27:53.423 "name": "Nvme$subsystem", 00:27:53.423 "trtype": "$TEST_TRANSPORT", 00:27:53.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.423 "adrfam": "ipv4", 00:27:53.423 "trsvcid": "$NVMF_PORT", 00:27:53.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.423 "hdgst": ${hdgst:-false}, 00:27:53.423 "ddgst": ${ddgst:-false} 00:27:53.423 }, 00:27:53.423 "method": "bdev_nvme_attach_controller" 00:27:53.423 } 00:27:53.423 EOF 00:27:53.423 )") 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local sanitizers 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # shift 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local asan_lib= 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libasan 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.423 { 00:27:53.423 "params": { 00:27:53.423 "name": "Nvme$subsystem", 00:27:53.423 "trtype": "$TEST_TRANSPORT", 00:27:53.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.423 "adrfam": "ipv4", 00:27:53.423 "trsvcid": "$NVMF_PORT", 00:27:53.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.423 "hdgst": ${hdgst:-false}, 00:27:53.423 "ddgst": ${ddgst:-false} 00:27:53.423 }, 00:27:53.423 "method": "bdev_nvme_attach_controller" 00:27:53.423 } 00:27:53.423 EOF 00:27:53.423 )") 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:53.423 00:29:11 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.423 { 00:27:53.423 "params": { 00:27:53.423 "name": "Nvme$subsystem", 00:27:53.423 "trtype": "$TEST_TRANSPORT", 00:27:53.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.423 "adrfam": "ipv4", 00:27:53.423 "trsvcid": "$NVMF_PORT", 00:27:53.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.423 "hdgst": ${hdgst:-false}, 00:27:53.423 "ddgst": ${ddgst:-false} 00:27:53.423 }, 00:27:53.423 "method": "bdev_nvme_attach_controller" 00:27:53.423 } 00:27:53.423 EOF 00:27:53.423 )") 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:53.423 "params": { 00:27:53.423 "name": "Nvme0", 00:27:53.423 "trtype": "tcp", 00:27:53.423 "traddr": "10.0.0.2", 00:27:53.423 "adrfam": "ipv4", 00:27:53.423 "trsvcid": "4420", 00:27:53.423 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:53.423 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:53.423 "hdgst": false, 00:27:53.423 "ddgst": false 00:27:53.423 }, 00:27:53.423 "method": "bdev_nvme_attach_controller" 00:27:53.423 },{ 00:27:53.423 "params": { 00:27:53.423 "name": "Nvme1", 00:27:53.423 "trtype": "tcp", 00:27:53.423 "traddr": "10.0.0.2", 00:27:53.423 "adrfam": "ipv4", 00:27:53.423 "trsvcid": "4420", 00:27:53.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:53.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:53.423 "hdgst": false, 00:27:53.423 "ddgst": false 00:27:53.423 }, 00:27:53.423 "method": "bdev_nvme_attach_controller" 00:27:53.423 },{ 00:27:53.423 "params": { 00:27:53.423 "name": "Nvme2", 00:27:53.423 "trtype": "tcp", 00:27:53.423 "traddr": "10.0.0.2", 00:27:53.423 "adrfam": "ipv4", 00:27:53.423 "trsvcid": "4420", 00:27:53.423 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:53.423 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:53.423 "hdgst": false, 00:27:53.423 "ddgst": false 00:27:53.423 }, 00:27:53.423 "method": "bdev_nvme_attach_controller" 00:27:53.423 }' 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:53.423 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:27:53.424 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:27:53.424 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 
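
[editor's note] With the three bdev_nvme_attach_controller stanzas assembled and joined by jq above, fio_bdev preloads the SPDK fio plugin and hands fio the JSON bdev config plus the generated jobfile over /dev/fd. A minimal equivalent of the invocation that follows in the log — process substitution stands in for the wrapper's /dev/fd/62 and /dev/fd/61, and bdev.json / dif.fio are hypothetical file names:

# Same shape as the fio_bdev call below: SPDK plugin via LD_PRELOAD, bdev
# config via --spdk_json_conf, generated jobfile as the positional argument.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
/usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(cat bdev.json) <(cat dif.fio)
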
00:27:53.424 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:27:53.424 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:53.424 00:29:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:53.424 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:53.424 ... 00:27:53.424 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:53.424 ... 00:27:53.424 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:53.424 ... 00:27:53.424 fio-3.35 00:27:53.424 Starting 24 threads 00:28:05.679 00:28:05.679 filename0: (groupid=0, jobs=1): err= 0: pid=1685018: Tue Jul 16 00:29:22 2024 00:28:05.679 read: IOPS=583, BW=2335KiB/s (2391kB/s)(22.8MiB/10004msec) 00:28:05.679 slat (nsec): min=4202, max=89150, avg=11067.65, stdev=6335.62 00:28:05.679 clat (usec): min=1629, max=40398, avg=27308.49, stdev=3678.14 00:28:05.679 lat (usec): min=1636, max=40411, avg=27319.55, stdev=3678.53 00:28:05.679 clat percentiles (usec): 00:28:05.679 | 1.00th=[ 3884], 5.00th=[26870], 10.00th=[27132], 20.00th=[27657], 00:28:05.679 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:28:05.679 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:28:05.679 | 99.00th=[28705], 99.50th=[28705], 99.90th=[40109], 99.95th=[40633], 00:28:05.679 | 99.99th=[40633] 00:28:05.679 bw ( KiB/s): min= 2176, max= 3072, per=4.27%, avg=2337.68, stdev=180.22, samples=19 00:28:05.679 iops : min= 544, max= 768, avg=584.42, stdev=45.06, samples=19 00:28:05.679 lat (msec) : 2=0.82%, 4=0.24%, 10=1.13%, 20=0.27%, 50=97.53% 00:28:05.679 cpu : usr=98.94%, sys=0.64%, ctx=12, majf=0, minf=80 00:28:05.679 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:05.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.679 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.679 issued rwts: total=5840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.679 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.679 filename0: (groupid=0, jobs=1): err= 0: pid=1685019: Tue Jul 16 00:29:22 2024 00:28:05.679 read: IOPS=572, BW=2291KiB/s (2345kB/s)(22.4MiB/10003msec) 00:28:05.679 slat (usec): min=7, max=107, avg=41.57, stdev=18.50 00:28:05.679 clat (usec): min=16680, max=43798, avg=27542.94, stdev=1101.64 00:28:05.679 lat (usec): min=16703, max=43817, avg=27584.51, stdev=1102.05 00:28:05.679 clat percentiles (usec): 00:28:05.679 | 1.00th=[26346], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:05.679 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:28:05.679 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:05.679 | 99.00th=[28443], 99.50th=[28705], 99.90th=[43779], 99.95th=[43779], 00:28:05.679 | 99.99th=[43779] 00:28:05.679 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2283.79, stdev=64.19, samples=19 00:28:05.679 iops : min= 544, max= 608, avg=570.95, stdev=16.05, samples=19 00:28:05.679 lat (msec) : 20=0.28%, 50=99.72% 00:28:05.679 cpu : usr=98.65%, sys=0.83%, ctx=102, majf=0, minf=51 00:28:05.679 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:05.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.679 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.679 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.679 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.679 filename0: (groupid=0, jobs=1): err= 0: pid=1685020: Tue Jul 16 00:29:22 2024 00:28:05.679 read: IOPS=577, BW=2311KiB/s (2366kB/s)(22.6MiB/10016msec) 00:28:05.679 slat (nsec): min=6787, max=90774, avg=22592.46, stdev=18794.14 00:28:05.679 clat (usec): min=4827, max=41625, avg=27505.86, stdev=2545.35 00:28:05.679 lat (usec): min=4839, max=41684, avg=27528.45, stdev=2546.57 00:28:05.679 clat percentiles (usec): 00:28:05.679 | 1.00th=[17171], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:05.679 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:28:05.679 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:28:05.679 | 99.00th=[34866], 99.50th=[34866], 99.90th=[41157], 99.95th=[41681], 00:28:05.679 | 99.99th=[41681] 00:28:05.679 bw ( KiB/s): min= 2192, max= 2432, per=4.21%, avg=2308.21, stdev=42.95, samples=19 00:28:05.679 iops : min= 548, max= 608, avg=577.05, stdev=10.74, samples=19 00:28:05.679 lat (msec) : 10=0.55%, 20=1.11%, 50=98.34% 00:28:05.679 cpu : usr=98.71%, sys=0.90%, ctx=15, majf=0, minf=81 00:28:05.679 IO depths : 1=3.8%, 2=8.9%, 4=22.1%, 8=56.6%, 16=8.7%, 32=0.0%, >=64=0.0% 00:28:05.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.679 complete : 0=0.0%, 4=93.4%, 8=0.8%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.679 issued rwts: total=5786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.679 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.679 filename0: (groupid=0, jobs=1): err= 0: pid=1685021: Tue Jul 16 00:29:22 2024 00:28:05.679 read: IOPS=574, BW=2296KiB/s (2352kB/s)(22.4MiB/10005msec) 00:28:05.679 slat (nsec): min=5174, max=68140, avg=19896.01, stdev=5624.06 00:28:05.679 clat (usec): min=6052, max=58152, avg=27692.36, stdev=2294.89 00:28:05.679 lat (usec): min=6064, max=58167, avg=27712.26, stdev=2294.88 00:28:05.679 clat percentiles (usec): 00:28:05.679 | 1.00th=[25035], 5.00th=[26870], 10.00th=[27132], 20.00th=[27657], 00:28:05.679 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:28:05.679 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:28:05.679 | 99.00th=[28443], 99.50th=[28705], 99.90th=[57934], 99.95th=[57934], 00:28:05.680 | 99.99th=[57934] 00:28:05.680 bw ( KiB/s): min= 2048, max= 2304, per=4.16%, avg=2277.05, stdev=68.52, samples=19 00:28:05.680 iops : min= 512, max= 576, avg=569.26, stdev=17.13, samples=19 00:28:05.680 lat (msec) : 10=0.31%, 20=0.52%, 50=98.89%, 100=0.28% 00:28:05.680 cpu : usr=98.95%, sys=0.64%, ctx=13, majf=0, minf=74 00:28:05.680 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:05.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.680 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.680 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.680 filename0: (groupid=0, jobs=1): err= 0: pid=1685022: Tue Jul 16 00:29:22 2024 00:28:05.680 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10008msec) 00:28:05.680 slat (nsec): min=5092, 
max=41243, avg=19588.74, stdev=5490.91 00:28:05.680 clat (usec): min=9001, max=61960, avg=27769.47, stdev=2175.28 00:28:05.680 lat (usec): min=9009, max=61974, avg=27789.06, stdev=2174.98 00:28:05.680 clat percentiles (usec): 00:28:05.680 | 1.00th=[26346], 5.00th=[26870], 10.00th=[27132], 20.00th=[27657], 00:28:05.680 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:28:05.680 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:28:05.680 | 99.00th=[28705], 99.50th=[28967], 99.90th=[62129], 99.95th=[62129], 00:28:05.680 | 99.99th=[62129] 00:28:05.680 bw ( KiB/s): min= 2048, max= 2432, per=4.16%, avg=2277.05, stdev=80.72, samples=19 00:28:05.680 iops : min= 512, max= 608, avg=569.26, stdev=20.18, samples=19 00:28:05.680 lat (msec) : 10=0.03%, 20=0.52%, 50=99.16%, 100=0.28% 00:28:05.680 cpu : usr=98.65%, sys=0.97%, ctx=9, majf=0, minf=51 00:28:05.680 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:28:05.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.680 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.680 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.680 filename0: (groupid=0, jobs=1): err= 0: pid=1685023: Tue Jul 16 00:29:22 2024 00:28:05.680 read: IOPS=572, BW=2291KiB/s (2345kB/s)(22.4MiB/10003msec) 00:28:05.680 slat (usec): min=5, max=100, avg=44.54, stdev=22.17 00:28:05.680 clat (usec): min=16705, max=43859, avg=27491.34, stdev=1114.06 00:28:05.680 lat (usec): min=16731, max=43872, avg=27535.88, stdev=1114.82 00:28:05.680 clat percentiles (usec): 00:28:05.680 | 1.00th=[26346], 5.00th=[26608], 10.00th=[26870], 20.00th=[27132], 00:28:05.680 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27395], 60.00th=[27657], 00:28:05.680 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[27919], 00:28:05.680 | 99.00th=[28443], 99.50th=[28705], 99.90th=[43779], 99.95th=[43779], 00:28:05.680 | 99.99th=[43779] 00:28:05.680 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2283.79, stdev=64.19, samples=19 00:28:05.680 iops : min= 544, max= 608, avg=570.95, stdev=16.05, samples=19 00:28:05.680 lat (msec) : 20=0.28%, 50=99.72% 00:28:05.680 cpu : usr=98.69%, sys=0.92%, ctx=9, majf=0, minf=77 00:28:05.680 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:05.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.680 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.680 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.680 filename0: (groupid=0, jobs=1): err= 0: pid=1685024: Tue Jul 16 00:29:22 2024 00:28:05.680 read: IOPS=586, BW=2344KiB/s (2400kB/s)(22.9MiB/10003msec) 00:28:05.680 slat (usec): min=6, max=100, avg=18.43, stdev=12.82 00:28:05.680 clat (usec): min=8966, max=46893, avg=27137.75, stdev=2860.70 00:28:05.680 lat (usec): min=8975, max=46902, avg=27156.19, stdev=2862.34 00:28:05.680 clat percentiles (usec): 00:28:05.680 | 1.00th=[16319], 5.00th=[20579], 10.00th=[26346], 20.00th=[27132], 00:28:05.680 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:28:05.680 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:05.680 | 99.00th=[30278], 99.50th=[41681], 99.90th=[46400], 99.95th=[46400], 00:28:05.680 | 99.99th=[46924] 00:28:05.680 bw ( 
KiB/s): min= 2176, max= 2816, per=4.27%, avg=2340.21, stdev=163.59, samples=19 00:28:05.680 iops : min= 544, max= 704, avg=585.05, stdev=40.90, samples=19 00:28:05.680 lat (msec) : 10=0.14%, 20=4.61%, 50=95.26% 00:28:05.680 cpu : usr=98.83%, sys=0.78%, ctx=8, majf=0, minf=52 00:28:05.680 IO depths : 1=5.3%, 2=10.6%, 4=21.9%, 8=54.6%, 16=7.6%, 32=0.0%, >=64=0.0% 00:28:05.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.680 complete : 0=0.0%, 4=93.4%, 8=1.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.680 issued rwts: total=5862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.680 filename0: (groupid=0, jobs=1): err= 0: pid=1685025: Tue Jul 16 00:29:22 2024 00:28:05.680 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10007msec) 00:28:05.680 slat (nsec): min=6944, max=48332, avg=14500.63, stdev=5125.34 00:28:05.680 clat (usec): min=23481, max=45227, avg=27827.64, stdev=1022.15 00:28:05.680 lat (usec): min=23497, max=45245, avg=27842.14, stdev=1022.45 00:28:05.680 clat percentiles (usec): 00:28:05.680 | 1.00th=[26608], 5.00th=[26870], 10.00th=[27132], 20.00th=[27657], 00:28:05.680 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:28:05.680 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:28:05.680 | 99.00th=[28705], 99.50th=[28705], 99.90th=[45351], 99.95th=[45351], 00:28:05.680 | 99.99th=[45351] 00:28:05.680 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2283.79, stdev=47.95, samples=19 00:28:05.680 iops : min= 544, max= 576, avg=570.95, stdev=11.99, samples=19 00:28:05.680 lat (msec) : 50=100.00% 00:28:05.680 cpu : usr=98.74%, sys=0.86%, ctx=13, majf=0, minf=77 00:28:05.680 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:05.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.680 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.680 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.680 filename1: (groupid=0, jobs=1): err= 0: pid=1685026: Tue Jul 16 00:29:22 2024 00:28:05.680 read: IOPS=572, BW=2291KiB/s (2346kB/s)(22.4MiB/10002msec) 00:28:05.680 slat (nsec): min=9247, max=85985, avg=36288.64, stdev=13833.97 00:28:05.680 clat (usec): min=16906, max=43603, avg=27629.03, stdev=1092.05 00:28:05.680 lat (usec): min=16934, max=43652, avg=27665.32, stdev=1090.72 00:28:05.680 clat percentiles (usec): 00:28:05.680 | 1.00th=[26346], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:05.680 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:05.680 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:05.680 | 99.00th=[28443], 99.50th=[28705], 99.90th=[43254], 99.95th=[43779], 00:28:05.680 | 99.99th=[43779] 00:28:05.680 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2284.00, stdev=63.82, samples=19 00:28:05.680 iops : min= 544, max= 608, avg=571.00, stdev=15.95, samples=19 00:28:05.680 lat (msec) : 20=0.28%, 50=99.72% 00:28:05.680 cpu : usr=96.77%, sys=1.70%, ctx=179, majf=0, minf=51 00:28:05.680 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:05.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.680 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.680 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:28:05.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.680 filename1: (groupid=0, jobs=1): err= 0: pid=1685027: Tue Jul 16 00:29:22 2024 00:28:05.680 read: IOPS=572, BW=2291KiB/s (2346kB/s)(22.4MiB/10002msec) 00:28:05.680 slat (nsec): min=6788, max=82055, avg=14483.19, stdev=6537.80 00:28:05.680 clat (usec): min=5045, max=64342, avg=27882.51, stdev=2322.96 00:28:05.680 lat (usec): min=5053, max=64363, avg=27896.99, stdev=2323.39 00:28:05.680 clat percentiles (usec): 00:28:05.680 | 1.00th=[24773], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:28:05.680 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:28:05.680 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:05.680 | 99.00th=[34341], 99.50th=[35914], 99.90th=[53740], 99.95th=[64226], 00:28:05.680 | 99.99th=[64226] 00:28:05.680 bw ( KiB/s): min= 2016, max= 2304, per=4.16%, avg=2277.05, stdev=66.41, samples=19 00:28:05.680 iops : min= 504, max= 576, avg=569.26, stdev=16.60, samples=19 00:28:05.680 lat (msec) : 10=0.30%, 20=0.45%, 50=98.97%, 100=0.28% 00:28:05.680 cpu : usr=98.98%, sys=0.64%, ctx=9, majf=0, minf=75 00:28:05.680 IO depths : 1=0.1%, 2=0.2%, 4=1.0%, 8=80.4%, 16=18.4%, 32=0.0%, >=64=0.0% 00:28:05.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.680 complete : 0=0.0%, 4=89.6%, 8=10.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.680 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.680 filename1: (groupid=0, jobs=1): err= 0: pid=1685028: Tue Jul 16 00:29:22 2024 00:28:05.680 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10007msec) 00:28:05.680 slat (usec): min=5, max=100, avg=26.93, stdev=16.78 00:28:05.680 clat (usec): min=5759, max=59964, avg=27690.82, stdev=2210.45 00:28:05.680 lat (usec): min=5767, max=59981, avg=27717.75, stdev=2209.73 00:28:05.680 clat percentiles (usec): 00:28:05.680 | 1.00th=[23725], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:05.680 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:28:05.680 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:05.680 | 99.00th=[28705], 99.50th=[32637], 99.90th=[60031], 99.95th=[60031], 00:28:05.680 | 99.99th=[60031] 00:28:05.680 bw ( KiB/s): min= 2048, max= 2304, per=4.16%, avg=2277.05, stdev=68.52, samples=19 00:28:05.680 iops : min= 512, max= 576, avg=569.26, stdev=17.13, samples=19 00:28:05.680 lat (msec) : 10=0.16%, 20=0.40%, 50=99.16%, 100=0.28% 00:28:05.680 cpu : usr=98.80%, sys=0.81%, ctx=8, majf=0, minf=52 00:28:05.680 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:05.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.680 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.680 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.680 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.680 filename1: (groupid=0, jobs=1): err= 0: pid=1685029: Tue Jul 16 00:29:22 2024 00:28:05.680 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10007msec) 00:28:05.680 slat (usec): min=6, max=100, avg=34.02, stdev=21.45 00:28:05.680 clat (usec): min=12639, max=94974, avg=27703.84, stdev=3171.18 00:28:05.680 lat (usec): min=12653, max=94995, avg=27737.86, stdev=3169.67 00:28:05.680 clat percentiles (usec): 00:28:05.680 | 1.00th=[23725], 5.00th=[26608], 10.00th=[26870], 20.00th=[27132], 
00:28:05.680 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:28:05.680 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:05.680 | 99.00th=[30278], 99.50th=[35390], 99.90th=[80217], 99.95th=[80217], 00:28:05.680 | 99.99th=[94897] 00:28:05.680 bw ( KiB/s): min= 2048, max= 2304, per=4.16%, avg=2277.05, stdev=68.52, samples=19 00:28:05.680 iops : min= 512, max= 576, avg=569.26, stdev=17.13, samples=19 00:28:05.681 lat (msec) : 20=0.56%, 50=99.16%, 100=0.28% 00:28:05.681 cpu : usr=98.78%, sys=0.82%, ctx=9, majf=0, minf=51 00:28:05.681 IO depths : 1=6.0%, 2=12.1%, 4=24.4%, 8=50.9%, 16=6.6%, 32=0.0%, >=64=0.0% 00:28:05.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.681 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.681 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.681 filename1: (groupid=0, jobs=1): err= 0: pid=1685030: Tue Jul 16 00:29:22 2024 00:28:05.681 read: IOPS=572, BW=2291KiB/s (2345kB/s)(22.4MiB/10003msec) 00:28:05.681 slat (usec): min=7, max=100, avg=45.73, stdev=21.64 00:28:05.681 clat (usec): min=16668, max=43771, avg=27503.18, stdev=1110.78 00:28:05.681 lat (usec): min=16695, max=43787, avg=27548.91, stdev=1110.89 00:28:05.681 clat percentiles (usec): 00:28:05.681 | 1.00th=[26346], 5.00th=[26608], 10.00th=[26870], 20.00th=[27132], 00:28:05.681 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27395], 60.00th=[27657], 00:28:05.681 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:05.681 | 99.00th=[28443], 99.50th=[28705], 99.90th=[43779], 99.95th=[43779], 00:28:05.681 | 99.99th=[43779] 00:28:05.681 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2283.79, stdev=64.19, samples=19 00:28:05.681 iops : min= 544, max= 608, avg=570.95, stdev=16.05, samples=19 00:28:05.681 lat (msec) : 20=0.28%, 50=99.72% 00:28:05.681 cpu : usr=98.98%, sys=0.62%, ctx=14, majf=0, minf=64 00:28:05.681 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:05.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.681 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.681 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.681 filename1: (groupid=0, jobs=1): err= 0: pid=1685031: Tue Jul 16 00:29:22 2024 00:28:05.681 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10003msec) 00:28:05.681 slat (nsec): min=6295, max=89683, avg=18808.89, stdev=8904.94 00:28:05.681 clat (usec): min=2905, max=54048, avg=27835.86, stdev=2546.16 00:28:05.681 lat (usec): min=2917, max=54067, avg=27854.67, stdev=2546.49 00:28:05.681 clat percentiles (usec): 00:28:05.681 | 1.00th=[21365], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:28:05.681 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:28:05.681 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:05.681 | 99.00th=[35390], 99.50th=[43779], 99.90th=[54264], 99.95th=[54264], 00:28:05.681 | 99.99th=[54264] 00:28:05.681 bw ( KiB/s): min= 2048, max= 2336, per=4.15%, avg=2275.37, stdev=64.13, samples=19 00:28:05.681 iops : min= 512, max= 584, avg=568.84, stdev=16.03, samples=19 00:28:05.681 lat (msec) : 4=0.07%, 10=0.21%, 20=0.66%, 50=98.78%, 100=0.28% 00:28:05.681 cpu : usr=98.87%, sys=0.74%, ctx=16, majf=0, minf=62 
00:28:05.681 IO depths : 1=1.3%, 2=5.1%, 4=16.3%, 8=64.1%, 16=13.2%, 32=0.0%, >=64=0.0% 00:28:05.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.681 complete : 0=0.0%, 4=92.6%, 8=3.7%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.681 issued rwts: total=5724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.681 filename1: (groupid=0, jobs=1): err= 0: pid=1685032: Tue Jul 16 00:29:22 2024 00:28:05.681 read: IOPS=576, BW=2305KiB/s (2361kB/s)(22.6MiB/10022msec) 00:28:05.681 slat (usec): min=7, max=100, avg=42.76, stdev=22.22 00:28:05.681 clat (usec): min=2803, max=41227, avg=27406.31, stdev=2113.39 00:28:05.681 lat (usec): min=2814, max=41272, avg=27449.06, stdev=2114.64 00:28:05.681 clat percentiles (usec): 00:28:05.681 | 1.00th=[16909], 5.00th=[26608], 10.00th=[26870], 20.00th=[27132], 00:28:05.681 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:28:05.681 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:28:05.681 | 99.00th=[28443], 99.50th=[28705], 99.90th=[40633], 99.95th=[41157], 00:28:05.681 | 99.99th=[41157] 00:28:05.681 bw ( KiB/s): min= 2176, max= 2432, per=4.20%, avg=2304.00, stdev=42.67, samples=19 00:28:05.681 iops : min= 544, max= 608, avg=576.00, stdev=10.67, samples=19 00:28:05.681 lat (msec) : 4=0.05%, 10=0.50%, 20=0.55%, 50=98.89% 00:28:05.681 cpu : usr=99.00%, sys=0.61%, ctx=15, majf=0, minf=37 00:28:05.681 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:05.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.681 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.681 issued rwts: total=5776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.681 filename1: (groupid=0, jobs=1): err= 0: pid=1685033: Tue Jul 16 00:29:22 2024 00:28:05.681 read: IOPS=572, BW=2291KiB/s (2345kB/s)(22.4MiB/10003msec) 00:28:05.681 slat (usec): min=7, max=100, avg=45.66, stdev=22.13 00:28:05.681 clat (usec): min=16700, max=43832, avg=27478.55, stdev=1114.69 00:28:05.681 lat (usec): min=16717, max=43845, avg=27524.21, stdev=1115.87 00:28:05.681 clat percentiles (usec): 00:28:05.681 | 1.00th=[26346], 5.00th=[26608], 10.00th=[26870], 20.00th=[27132], 00:28:05.681 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27395], 60.00th=[27657], 00:28:05.681 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[27919], 00:28:05.681 | 99.00th=[28443], 99.50th=[28705], 99.90th=[43779], 99.95th=[43779], 00:28:05.681 | 99.99th=[43779] 00:28:05.681 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2283.79, stdev=64.19, samples=19 00:28:05.681 iops : min= 544, max= 608, avg=570.95, stdev=16.05, samples=19 00:28:05.681 lat (msec) : 20=0.28%, 50=99.72% 00:28:05.681 cpu : usr=98.85%, sys=0.75%, ctx=14, majf=0, minf=63 00:28:05.681 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:05.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.681 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.681 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.681 filename2: (groupid=0, jobs=1): err= 0: pid=1685034: Tue Jul 16 00:29:22 2024 00:28:05.681 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10005msec) 00:28:05.681 slat (nsec): min=5735, 
max=95298, avg=17099.18, stdev=13888.21 00:28:05.681 clat (usec): min=5285, max=67586, avg=27839.29, stdev=2291.69 00:28:05.681 lat (usec): min=5292, max=67602, avg=27856.39, stdev=2291.33 00:28:05.681 clat percentiles (usec): 00:28:05.681 | 1.00th=[23725], 5.00th=[27132], 10.00th=[27132], 20.00th=[27657], 00:28:05.681 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:28:05.681 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:28:05.681 | 99.00th=[28967], 99.50th=[35390], 99.90th=[56886], 99.95th=[67634], 00:28:05.681 | 99.99th=[67634] 00:28:05.681 bw ( KiB/s): min= 2052, max= 2352, per=4.16%, avg=2276.42, stdev=65.82, samples=19 00:28:05.681 iops : min= 513, max= 588, avg=569.11, stdev=16.45, samples=19 00:28:05.681 lat (msec) : 10=0.14%, 20=0.45%, 50=99.13%, 100=0.28% 00:28:05.681 cpu : usr=98.77%, sys=0.84%, ctx=6, majf=0, minf=75 00:28:05.681 IO depths : 1=1.3%, 2=3.1%, 4=7.3%, 8=72.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:28:05.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.681 complete : 0=0.0%, 4=90.8%, 8=7.6%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.681 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.681 filename2: (groupid=0, jobs=1): err= 0: pid=1685035: Tue Jul 16 00:29:22 2024 00:28:05.681 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10008msec) 00:28:05.681 slat (nsec): min=5153, max=48484, avg=19655.97, stdev=5683.53 00:28:05.681 clat (usec): min=10153, max=61659, avg=27777.82, stdev=2145.62 00:28:05.681 lat (usec): min=10160, max=61673, avg=27797.48, stdev=2145.36 00:28:05.681 clat percentiles (usec): 00:28:05.681 | 1.00th=[26346], 5.00th=[27132], 10.00th=[27132], 20.00th=[27657], 00:28:05.681 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:28:05.681 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:28:05.681 | 99.00th=[28705], 99.50th=[32637], 99.90th=[61604], 99.95th=[61604], 00:28:05.681 | 99.99th=[61604] 00:28:05.681 bw ( KiB/s): min= 2048, max= 2432, per=4.16%, avg=2277.05, stdev=80.72, samples=19 00:28:05.681 iops : min= 512, max= 608, avg=569.26, stdev=20.18, samples=19 00:28:05.681 lat (msec) : 20=0.56%, 50=99.16%, 100=0.28% 00:28:05.681 cpu : usr=98.64%, sys=0.97%, ctx=15, majf=0, minf=49 00:28:05.681 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:05.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.681 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.681 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.681 filename2: (groupid=0, jobs=1): err= 0: pid=1685036: Tue Jul 16 00:29:22 2024 00:28:05.681 read: IOPS=572, BW=2291KiB/s (2345kB/s)(22.4MiB/10003msec) 00:28:05.681 slat (usec): min=6, max=100, avg=45.62, stdev=21.88 00:28:05.681 clat (usec): min=17120, max=43702, avg=27487.10, stdev=1049.25 00:28:05.681 lat (usec): min=17202, max=43715, avg=27532.71, stdev=1050.71 00:28:05.681 clat percentiles (usec): 00:28:05.681 | 1.00th=[26346], 5.00th=[26608], 10.00th=[26870], 20.00th=[27132], 00:28:05.681 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27395], 60.00th=[27657], 00:28:05.681 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[27919], 00:28:05.681 | 99.00th=[28443], 99.50th=[28705], 99.90th=[43254], 99.95th=[43779], 00:28:05.681 | 
99.99th=[43779] 00:28:05.681 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2283.79, stdev=64.19, samples=19 00:28:05.681 iops : min= 544, max= 608, avg=570.95, stdev=16.05, samples=19 00:28:05.681 lat (msec) : 20=0.28%, 50=99.72% 00:28:05.681 cpu : usr=98.85%, sys=0.75%, ctx=12, majf=0, minf=48 00:28:05.681 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:05.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.681 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.681 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.681 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.681 filename2: (groupid=0, jobs=1): err= 0: pid=1685037: Tue Jul 16 00:29:22 2024 00:28:05.681 read: IOPS=513, BW=2052KiB/s (2101kB/s)(20.0MiB/10003msec) 00:28:05.681 slat (nsec): min=6543, max=96026, avg=19363.73, stdev=14700.58 00:28:05.681 clat (usec): min=2715, max=68375, avg=31039.10, stdev=5630.32 00:28:05.681 lat (usec): min=2722, max=68395, avg=31058.46, stdev=5632.58 00:28:05.681 clat percentiles (usec): 00:28:05.681 | 1.00th=[26084], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:28:05.681 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:28:05.681 | 70.00th=[32637], 80.00th=[34866], 90.00th=[39584], 95.00th=[42730], 00:28:05.681 | 99.00th=[44303], 99.50th=[44827], 99.90th=[68682], 99.95th=[68682], 00:28:05.681 | 99.99th=[68682] 00:28:05.681 bw ( KiB/s): min= 1664, max= 2304, per=3.74%, avg=2048.00, stdev=252.70, samples=19 00:28:05.682 iops : min= 416, max= 576, avg=512.00, stdev=63.18, samples=19 00:28:05.682 lat (msec) : 4=0.12%, 10=0.19%, 20=0.58%, 50=98.79%, 100=0.31% 00:28:05.682 cpu : usr=98.61%, sys=0.99%, ctx=10, majf=0, minf=65 00:28:05.682 IO depths : 1=2.2%, 2=5.8%, 4=21.1%, 8=60.2%, 16=10.7%, 32=0.0%, >=64=0.0% 00:28:05.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.682 complete : 0=0.0%, 4=94.0%, 8=0.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.682 issued rwts: total=5132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.682 filename2: (groupid=0, jobs=1): err= 0: pid=1685038: Tue Jul 16 00:29:22 2024 00:28:05.682 read: IOPS=571, BW=2284KiB/s (2339kB/s)(22.3MiB/10003msec) 00:28:05.682 slat (usec): min=4, max=101, avg=31.17, stdev=22.36 00:28:05.682 clat (usec): min=16676, max=68879, avg=27792.61, stdev=2281.07 00:28:05.682 lat (usec): min=16758, max=68892, avg=27823.79, stdev=2277.88 00:28:05.682 clat percentiles (usec): 00:28:05.682 | 1.00th=[26346], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:05.682 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:28:05.682 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:28:05.682 | 99.00th=[28705], 99.50th=[28705], 99.90th=[68682], 99.95th=[68682], 00:28:05.682 | 99.99th=[68682] 00:28:05.682 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2283.79, stdev=64.19, samples=19 00:28:05.682 iops : min= 512, max= 576, avg=570.95, stdev=16.05, samples=19 00:28:05.682 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:28:05.682 cpu : usr=98.91%, sys=0.69%, ctx=15, majf=0, minf=56 00:28:05.682 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:05.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.682 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.682 
issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.682 filename2: (groupid=0, jobs=1): err= 0: pid=1685039: Tue Jul 16 00:29:22 2024 00:28:05.682 read: IOPS=570, BW=2281KiB/s (2336kB/s)(22.3MiB/10003msec) 00:28:05.682 slat (usec): min=6, max=100, avg=18.67, stdev=11.62 00:28:05.682 clat (usec): min=2630, max=82200, avg=27954.04, stdev=3569.00 00:28:05.682 lat (usec): min=2636, max=82218, avg=27972.71, stdev=3569.18 00:28:05.682 clat percentiles (usec): 00:28:05.682 | 1.00th=[15139], 5.00th=[26870], 10.00th=[27132], 20.00th=[27657], 00:28:05.682 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:28:05.682 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:28:05.682 | 99.00th=[40633], 99.50th=[43254], 99.90th=[68682], 99.95th=[82314], 00:28:05.682 | 99.99th=[82314] 00:28:05.682 bw ( KiB/s): min= 1968, max= 2320, per=4.13%, avg=2262.74, stdev=80.59, samples=19 00:28:05.682 iops : min= 492, max= 580, avg=565.68, stdev=20.15, samples=19 00:28:05.682 lat (msec) : 4=0.18%, 10=0.25%, 20=1.03%, 50=98.26%, 100=0.28% 00:28:05.682 cpu : usr=98.83%, sys=0.76%, ctx=14, majf=0, minf=62 00:28:05.682 IO depths : 1=0.6%, 2=2.7%, 4=9.2%, 8=71.9%, 16=15.6%, 32=0.0%, >=64=0.0% 00:28:05.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.682 complete : 0=0.0%, 4=91.1%, 8=6.7%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.682 issued rwts: total=5704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.682 filename2: (groupid=0, jobs=1): err= 0: pid=1685040: Tue Jul 16 00:29:22 2024 00:28:05.682 read: IOPS=572, BW=2291KiB/s (2345kB/s)(22.4MiB/10003msec) 00:28:05.682 slat (nsec): min=6878, max=86042, avg=15429.71, stdev=7140.93 00:28:05.682 clat (usec): min=13797, max=59093, avg=27815.33, stdev=2289.11 00:28:05.682 lat (usec): min=13807, max=59111, avg=27830.76, stdev=2289.56 00:28:05.682 clat percentiles (usec): 00:28:05.682 | 1.00th=[17695], 5.00th=[26870], 10.00th=[27132], 20.00th=[27657], 00:28:05.682 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:28:05.682 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:05.682 | 99.00th=[40109], 99.50th=[40633], 99.90th=[45351], 99.95th=[47973], 00:28:05.682 | 99.99th=[58983] 00:28:05.682 bw ( KiB/s): min= 2160, max= 2384, per=4.17%, avg=2283.79, stdev=57.15, samples=19 00:28:05.682 iops : min= 540, max= 596, avg=570.95, stdev=14.29, samples=19 00:28:05.682 lat (msec) : 20=1.50%, 50=98.48%, 100=0.02% 00:28:05.682 cpu : usr=98.81%, sys=0.80%, ctx=14, majf=0, minf=76 00:28:05.682 IO depths : 1=5.1%, 2=10.6%, 4=23.0%, 8=53.8%, 16=7.5%, 32=0.0%, >=64=0.0% 00:28:05.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.682 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.682 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.682 filename2: (groupid=0, jobs=1): err= 0: pid=1685041: Tue Jul 16 00:29:22 2024 00:28:05.682 read: IOPS=577, BW=2309KiB/s (2364kB/s)(22.6MiB/10002msec) 00:28:05.682 slat (usec): min=3, max=100, avg=38.71, stdev=22.99 00:28:05.682 clat (usec): min=4759, max=41462, avg=27412.89, stdev=2093.10 00:28:05.682 lat (usec): min=4767, max=41504, avg=27451.60, stdev=2094.53 00:28:05.682 clat percentiles (usec): 00:28:05.682 | 
1.00th=[16909], 5.00th=[26608], 10.00th=[27132], 20.00th=[27132], 00:28:05.682 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:28:05.682 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:28:05.682 | 99.00th=[28705], 99.50th=[28967], 99.90th=[39584], 99.95th=[41157], 00:28:05.682 | 99.99th=[41681] 00:28:05.682 bw ( KiB/s): min= 2176, max= 2541, per=4.22%, avg=2309.74, stdev=63.21, samples=19 00:28:05.682 iops : min= 544, max= 635, avg=577.42, stdev=15.75, samples=19 00:28:05.682 lat (msec) : 10=0.55%, 20=0.88%, 50=98.56% 00:28:05.682 cpu : usr=98.78%, sys=0.83%, ctx=14, majf=0, minf=52 00:28:05.682 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:05.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.682 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.682 issued rwts: total=5773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.682 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:05.682 00:28:05.682 Run status group 0 (all jobs): 00:28:05.682 READ: bw=53.5MiB/s (56.1MB/s), 2052KiB/s-2344KiB/s (2101kB/s-2400kB/s), io=536MiB (562MB), run=10002-10022msec 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- 
# set +x 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.682 bdev_null0 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:05.682 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.683 00:29:23 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.683 [2024-07-16 00:29:23.191020] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.683 bdev_null1 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:05.683 
00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local sanitizers 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # shift 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local asan_lib= 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.683 { 00:28:05.683 "params": { 00:28:05.683 "name": "Nvme$subsystem", 00:28:05.683 "trtype": "$TEST_TRANSPORT", 00:28:05.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.683 "adrfam": "ipv4", 00:28:05.683 "trsvcid": "$NVMF_PORT", 00:28:05.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.683 "hdgst": ${hdgst:-false}, 00:28:05.683 "ddgst": ${ddgst:-false} 00:28:05.683 }, 00:28:05.683 "method": "bdev_nvme_attach_controller" 00:28:05.683 } 00:28:05.683 EOF 00:28:05.683 )") 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libasan 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.683 { 00:28:05.683 "params": { 00:28:05.683 "name": "Nvme$subsystem", 00:28:05.683 "trtype": "$TEST_TRANSPORT", 00:28:05.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.683 "adrfam": "ipv4", 00:28:05.683 "trsvcid": "$NVMF_PORT", 00:28:05.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.683 "hdgst": ${hdgst:-false}, 00:28:05.683 "ddgst": ${ddgst:-false} 00:28:05.683 }, 00:28:05.683 "method": "bdev_nvme_attach_controller" 00:28:05.683 } 00:28:05.683 EOF 00:28:05.683 )") 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:05.683 "params": { 00:28:05.683 "name": "Nvme0", 00:28:05.683 "trtype": "tcp", 00:28:05.683 "traddr": "10.0.0.2", 00:28:05.683 "adrfam": "ipv4", 00:28:05.683 "trsvcid": "4420", 00:28:05.683 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:05.683 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:05.683 "hdgst": false, 00:28:05.683 "ddgst": false 00:28:05.683 }, 00:28:05.683 "method": "bdev_nvme_attach_controller" 00:28:05.683 },{ 00:28:05.683 "params": { 00:28:05.683 "name": "Nvme1", 00:28:05.683 "trtype": "tcp", 00:28:05.683 "traddr": "10.0.0.2", 00:28:05.683 "adrfam": "ipv4", 00:28:05.683 "trsvcid": "4420", 00:28:05.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:05.683 "hdgst": false, 00:28:05.683 "ddgst": false 00:28:05.683 }, 00:28:05.683 "method": "bdev_nvme_attach_controller" 00:28:05.683 }' 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:05.683 00:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.683 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:05.683 ... 00:28:05.683 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:05.683 ... 
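Note on the plumbing traced above: gen_nvmf_target_json emits one bdev_nvme_attach_controller object per subsystem id and joins them with IFS=',', while fio_bdev LD_PRELOADs the spdk_bdev engine and hands fio the rendered JSON and the job file over /dev/fd. A minimal standalone sketch of the same idea (bash; the outer "subsystems"/"bdev" wrapper is an assumption based on SPDK's JSON config layout and is not itself visible in the trace):

    gen_conf() {
      # one attach request per subsystem id passed in "$@"
      local sub objs=()
      for sub in "$@"; do
        objs+=("{\"method\":\"bdev_nvme_attach_controller\",\"params\":{
          \"name\":\"Nvme$sub\",\"trtype\":\"tcp\",\"traddr\":\"10.0.0.2\",
          \"adrfam\":\"ipv4\",\"trsvcid\":\"4420\",
          \"subnqn\":\"nqn.2016-06.io.spdk:cnode$sub\",
          \"hostnqn\":\"nqn.2016-06.io.spdk:host$sub\",
          \"hdgst\":false,\"ddgst\":false}}")
      done
      local IFS=,   # join the objects with commas, as the trace does
      jq . <<<"{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${objs[*]}]}]}"
    }
    # feed the config to fio over /dev/fd, mirroring fio_bdev (job.fio is a placeholder):
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf <(gen_conf 0 1) job.fio

The fio banner and per-thread results for the four jobs follow.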
00:28:05.683 fio-3.35 00:28:05.683 Starting 4 threads 00:28:10.943 00:28:10.943 filename0: (groupid=0, jobs=1): err= 0: pid=1686985: Tue Jul 16 00:29:29 2024 00:28:10.943 read: IOPS=2596, BW=20.3MiB/s (21.3MB/s)(101MiB/5003msec) 00:28:10.943 slat (nsec): min=6177, max=26801, avg=8805.91, stdev=2769.49 00:28:10.943 clat (usec): min=1323, max=42889, avg=3055.72, stdev=1102.61 00:28:10.943 lat (usec): min=1330, max=42916, avg=3064.52, stdev=1102.54 00:28:10.943 clat percentiles (usec): 00:28:10.943 | 1.00th=[ 2057], 5.00th=[ 2474], 10.00th=[ 2671], 20.00th=[ 2769], 00:28:10.943 | 30.00th=[ 2835], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2999], 00:28:10.943 | 70.00th=[ 3032], 80.00th=[ 3064], 90.00th=[ 3818], 95.00th=[ 4359], 00:28:10.943 | 99.00th=[ 4424], 99.50th=[ 4555], 99.90th=[ 4948], 99.95th=[42730], 00:28:10.943 | 99.99th=[42730] 00:28:10.943 bw ( KiB/s): min=19760, max=21600, per=25.06%, avg=20772.80, stdev=606.13, samples=10 00:28:10.943 iops : min= 2470, max= 2700, avg=2596.60, stdev=75.77, samples=10 00:28:10.943 lat (msec) : 2=0.69%, 4=90.18%, 10=9.07%, 50=0.06% 00:28:10.943 cpu : usr=96.26%, sys=3.40%, ctx=8, majf=0, minf=0 00:28:10.943 IO depths : 1=0.1%, 2=2.7%, 4=69.5%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:10.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.943 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.943 issued rwts: total=12988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:10.943 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:10.943 filename0: (groupid=0, jobs=1): err= 0: pid=1686986: Tue Jul 16 00:29:29 2024 00:28:10.943 read: IOPS=2644, BW=20.7MiB/s (21.7MB/s)(103MiB/5001msec) 00:28:10.943 slat (nsec): min=6193, max=27416, avg=8720.93, stdev=2763.69 00:28:10.943 clat (usec): min=1451, max=44114, avg=3000.03, stdev=1076.13 00:28:10.943 lat (usec): min=1459, max=44132, avg=3008.75, stdev=1076.13 00:28:10.943 clat percentiles (usec): 00:28:10.943 | 1.00th=[ 2245], 5.00th=[ 2540], 10.00th=[ 2671], 20.00th=[ 2802], 00:28:10.943 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 2999], 00:28:10.943 | 70.00th=[ 3032], 80.00th=[ 3064], 90.00th=[ 3294], 95.00th=[ 3752], 00:28:10.943 | 99.00th=[ 4424], 99.50th=[ 4621], 99.90th=[ 5276], 99.95th=[44303], 00:28:10.943 | 99.99th=[44303] 00:28:10.943 bw ( KiB/s): min=19264, max=21888, per=25.51%, avg=21148.80, stdev=873.20, samples=10 00:28:10.943 iops : min= 2408, max= 2736, avg=2643.60, stdev=109.15, samples=10 00:28:10.943 lat (msec) : 2=0.27%, 4=96.04%, 10=3.62%, 50=0.06% 00:28:10.943 cpu : usr=96.34%, sys=3.34%, ctx=10, majf=0, minf=0 00:28:10.943 IO depths : 1=0.2%, 2=1.9%, 4=71.4%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:10.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.943 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.943 issued rwts: total=13223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:10.943 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:10.943 filename1: (groupid=0, jobs=1): err= 0: pid=1686987: Tue Jul 16 00:29:29 2024 00:28:10.943 read: IOPS=2579, BW=20.1MiB/s (21.1MB/s)(101MiB/5001msec) 00:28:10.943 slat (nsec): min=6167, max=26399, avg=8684.40, stdev=2800.69 00:28:10.943 clat (usec): min=1082, max=5122, avg=3077.57, stdev=567.61 00:28:10.943 lat (usec): min=1088, max=5129, avg=3086.26, stdev=567.16 00:28:10.943 clat percentiles (usec): 00:28:10.943 | 1.00th=[ 1205], 5.00th=[ 2540], 10.00th=[ 2704], 20.00th=[ 2769], 
00:28:10.943 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:28:10.943 | 70.00th=[ 3032], 80.00th=[ 3195], 90.00th=[ 4178], 95.00th=[ 4359], 00:28:10.943 | 99.00th=[ 4621], 99.50th=[ 4621], 99.90th=[ 5014], 99.95th=[ 5014], 00:28:10.943 | 99.99th=[ 5145] 00:28:10.943 bw ( KiB/s): min=19760, max=22592, per=24.92%, avg=20654.22, stdev=836.47, samples=9 00:28:10.943 iops : min= 2470, max= 2824, avg=2581.78, stdev=104.56, samples=9 00:28:10.943 lat (msec) : 2=1.96%, 4=86.64%, 10=11.40% 00:28:10.943 cpu : usr=95.98%, sys=3.70%, ctx=8, majf=0, minf=9 00:28:10.943 IO depths : 1=0.1%, 2=1.1%, 4=71.2%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:10.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.943 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.943 issued rwts: total=12898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:10.943 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:10.943 filename1: (groupid=0, jobs=1): err= 0: pid=1686988: Tue Jul 16 00:29:29 2024 00:28:10.943 read: IOPS=2604, BW=20.4MiB/s (21.3MB/s)(103MiB/5042msec) 00:28:10.943 slat (nsec): min=6194, max=28678, avg=9038.89, stdev=2843.92 00:28:10.943 clat (usec): min=1672, max=42941, avg=3030.17, stdev=1205.88 00:28:10.943 lat (usec): min=1684, max=42967, avg=3039.21, stdev=1205.88 00:28:10.943 clat percentiles (usec): 00:28:10.943 | 1.00th=[ 2278], 5.00th=[ 2573], 10.00th=[ 2671], 20.00th=[ 2802], 00:28:10.943 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2966], 60.00th=[ 2999], 00:28:10.943 | 70.00th=[ 3032], 80.00th=[ 3064], 90.00th=[ 3392], 95.00th=[ 3851], 00:28:10.943 | 99.00th=[ 4555], 99.50th=[ 4621], 99.90th=[ 5211], 99.95th=[42730], 00:28:10.943 | 99.99th=[42730] 00:28:10.943 bw ( KiB/s): min=19408, max=21888, per=25.34%, avg=21009.60, stdev=659.54, samples=10 00:28:10.943 iops : min= 2426, max= 2736, avg=2626.20, stdev=82.44, samples=10 00:28:10.943 lat (msec) : 2=0.11%, 4=95.48%, 10=4.32%, 50=0.08% 00:28:10.943 cpu : usr=95.93%, sys=3.73%, ctx=11, majf=0, minf=9 00:28:10.943 IO depths : 1=0.1%, 2=1.3%, 4=70.5%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:10.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.943 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.943 issued rwts: total=13134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:10.943 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:10.943 00:28:10.943 Run status group 0 (all jobs): 00:28:10.943 READ: bw=80.9MiB/s (84.9MB/s), 20.1MiB/s-20.7MiB/s (21.1MB/s-21.7MB/s), io=408MiB (428MB), run=5001-5042msec 00:28:10.943 00:29:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:10.943 00:29:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:10.943 00:29:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:10.943 00:29:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:10.943 00:29:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:10.943 00:29:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:10.943 00:29:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:10.943 00:29:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:10.943 00:29:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
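Teardown order matters in the destroy path that continues below: each NVMe-oF subsystem is deleted before its backing null bdev, so no namespace still references the bdev when it goes away. Roughly, via rpc.py (a sketch; the test's rpc_cmd wrapper resolves to this same script):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for sub in 0 1; do
      $RPC nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"   # removes its listener and namespace too
      $RPC bdev_null_delete "bdev_null$sub"
    done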
00:28:10.943 00:29:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:10.943 00:29:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:10.943 00:29:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:10.943 00:29:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:10.943 00:29:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:10.944 00:29:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:10.944 00:29:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:10.944 00:29:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:10.944 00:29:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:10.944 00:29:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:10.944 00:29:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:10.944 00:29:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:10.944 00:29:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:10.944 00:29:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:10.944 00:29:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:10.944 00:28:10.944 real 0m24.280s 00:28:10.944 user 4m52.061s 00:28:10.944 sys 0m4.185s 00:28:10.944 00:29:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1118 -- # xtrace_disable 00:28:10.944 00:29:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:10.944 ************************************ 00:28:10.944 END TEST fio_dif_rand_params 00:28:10.944 ************************************ 00:28:10.944 00:29:29 nvmf_dif -- common/autotest_common.sh@1136 -- # return 0 00:28:10.944 00:29:29 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:10.944 00:29:29 nvmf_dif -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:28:10.944 00:29:29 nvmf_dif -- common/autotest_common.sh@1099 -- # xtrace_disable 00:28:10.944 00:29:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:10.944 ************************************ 00:28:10.944 START TEST fio_dif_digest 00:28:10.944 ************************************ 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1117 -- # fio_dif_digest 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:10.944 00:29:29 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:10.944 bdev_null0 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:10.944 [2024-07-16 00:29:29.600912] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local sanitizers 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # shift 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1337 -- # local asan_lib= 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:10.944 { 00:28:10.944 "params": { 00:28:10.944 "name": "Nvme$subsystem", 00:28:10.944 "trtype": "$TEST_TRANSPORT", 00:28:10.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:10.944 "adrfam": "ipv4", 00:28:10.944 "trsvcid": "$NVMF_PORT", 00:28:10.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:10.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:10.944 "hdgst": ${hdgst:-false}, 00:28:10.944 "ddgst": ${ddgst:-false} 00:28:10.944 }, 00:28:10.944 "method": "bdev_nvme_attach_controller" 00:28:10.944 } 00:28:10.944 EOF 00:28:10.944 )") 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # grep libasan 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
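The ${hdgst:-false} / ${ddgst:-false} defaults in the heredoc above are what distinguish this run from fio_dif_rand_params: target/dif.sh@128 set hdgst=true and ddgst=true before gen_nvmf_target_json ran, so the params rendered next enable NVMe/TCP header and data digests. The expansion in isolation:

    hdgst=true;  echo "\"hdgst\": ${hdgst:-false}"   # -> "hdgst": true
    unset hdgst; echo "\"hdgst\": ${hdgst:-false}"   # -> "hdgst": false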
00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:10.944 "params": { 00:28:10.944 "name": "Nvme0", 00:28:10.944 "trtype": "tcp", 00:28:10.944 "traddr": "10.0.0.2", 00:28:10.944 "adrfam": "ipv4", 00:28:10.944 "trsvcid": "4420", 00:28:10.944 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:10.944 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:10.944 "hdgst": true, 00:28:10.944 "ddgst": true 00:28:10.944 }, 00:28:10.944 "method": "bdev_nvme_attach_controller" 00:28:10.944 }' 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # asan_lib= 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # asan_lib= 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:10.944 00:29:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:11.203 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:11.203 ... 
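The single-file job description fio prints next corresponds roughly to this job file, assembled from the values visible in the trace (bs=128k, iodepth=3, numjobs=3, runtime=10, randread); filename=Nvme0n1 and the remaining [global] options are assumptions, since gen_fio_conf's full output is not echoed:

    [global]
    ioengine=spdk_bdev
    thread=1
    time_based=1
    runtime=10

    [filename0]
    filename=Nvme0n1    # assumed: controller attached as Nvme0 exposes namespace 1 as Nvme0n1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3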
00:28:11.203 fio-3.35 00:28:11.203 Starting 3 threads 00:28:23.401 00:28:23.402 filename0: (groupid=0, jobs=1): err= 0: pid=1688048: Tue Jul 16 00:29:40 2024 00:28:23.402 read: IOPS=264, BW=33.0MiB/s (34.6MB/s)(332MiB/10045msec) 00:28:23.402 slat (usec): min=6, max=131, avg=23.35, stdev= 8.43 00:28:23.402 clat (usec): min=8620, max=53681, avg=11309.56, stdev=2378.02 00:28:23.402 lat (usec): min=8632, max=53714, avg=11332.92, stdev=2378.01 00:28:23.402 clat percentiles (usec): 00:28:23.402 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:28:23.402 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:28:23.402 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649], 00:28:23.402 | 99.00th=[13829], 99.50th=[14353], 99.90th=[52691], 99.95th=[52691], 00:28:23.402 | 99.99th=[53740] 00:28:23.402 bw ( KiB/s): min=31744, max=35584, per=32.56%, avg=33865.10, stdev=850.93, samples=20 00:28:23.402 iops : min= 248, max= 278, avg=264.55, stdev= 6.69, samples=20 00:28:23.402 lat (msec) : 10=6.37%, 20=93.33%, 50=0.04%, 100=0.26% 00:28:23.402 cpu : usr=96.20%, sys=3.44%, ctx=36, majf=0, minf=194 00:28:23.402 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:23.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.402 issued rwts: total=2654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.402 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:23.402 filename0: (groupid=0, jobs=1): err= 0: pid=1688049: Tue Jul 16 00:29:40 2024 00:28:23.402 read: IOPS=277, BW=34.7MiB/s (36.4MB/s)(349MiB/10048msec) 00:28:23.402 slat (nsec): min=6602, max=46761, avg=17540.22, stdev=8065.84 00:28:23.402 clat (usec): min=6412, max=51127, avg=10770.22, stdev=1355.37 00:28:23.402 lat (usec): min=6420, max=51155, avg=10787.76, stdev=1355.66 00:28:23.402 clat percentiles (usec): 00:28:23.402 | 1.00th=[ 8160], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10028], 00:28:23.402 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:28:23.402 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[12125], 00:28:23.402 | 99.00th=[12911], 99.50th=[13173], 99.90th=[14222], 99.95th=[47973], 00:28:23.402 | 99.99th=[51119] 00:28:23.402 bw ( KiB/s): min=34560, max=37120, per=34.30%, avg=35673.60, stdev=606.23, samples=20 00:28:23.402 iops : min= 270, max= 290, avg=278.70, stdev= 4.74, samples=20 00:28:23.402 lat (msec) : 10=17.17%, 20=82.75%, 50=0.04%, 100=0.04% 00:28:23.402 cpu : usr=95.79%, sys=3.87%, ctx=15, majf=0, minf=113 00:28:23.402 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:23.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.402 issued rwts: total=2789,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.402 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:23.402 filename0: (groupid=0, jobs=1): err= 0: pid=1688050: Tue Jul 16 00:29:40 2024 00:28:23.402 read: IOPS=271, BW=34.0MiB/s (35.7MB/s)(340MiB/10004msec) 00:28:23.402 slat (usec): min=6, max=136, avg=17.19, stdev= 7.96 00:28:23.402 clat (usec): min=7134, max=15791, avg=11008.65, stdev=886.13 00:28:23.402 lat (usec): min=7149, max=15820, avg=11025.84, stdev=886.09 00:28:23.402 clat percentiles (usec): 00:28:23.402 | 1.00th=[ 8586], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10290], 
00:28:23.402 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:28:23.402 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12125], 95.00th=[12518], 00:28:23.402 | 99.00th=[13173], 99.50th=[13435], 99.90th=[14484], 99.95th=[15139], 00:28:23.402 | 99.99th=[15795] 00:28:23.402 bw ( KiB/s): min=33280, max=35840, per=33.47%, avg=34812.47, stdev=695.04, samples=19 00:28:23.402 iops : min= 260, max= 280, avg=271.95, stdev= 5.48, samples=19 00:28:23.402 lat (msec) : 10=10.18%, 20=89.82% 00:28:23.402 cpu : usr=95.68%, sys=3.98%, ctx=26, majf=0, minf=193 00:28:23.402 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:23.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.402 issued rwts: total=2721,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.402 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:23.402 00:28:23.402 Run status group 0 (all jobs): 00:28:23.402 READ: bw=102MiB/s (106MB/s), 33.0MiB/s-34.7MiB/s (34.6MB/s-36.4MB/s), io=1021MiB (1070MB), run=10004-10048msec 00:28:23.402 00:29:40 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:23.402 00:29:40 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:23.402 00:29:40 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:23.402 00:29:40 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:23.402 00:29:40 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:23.402 00:29:40 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:23.402 00:29:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:23.402 00:29:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:23.402 00:29:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:23.402 00:29:40 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:23.402 00:29:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:23.402 00:29:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:23.402 00:29:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:23.402 00:28:23.402 real 0m11.089s 00:28:23.402 user 0m35.106s 00:28:23.402 sys 0m1.457s 00:28:23.402 00:29:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1118 -- # xtrace_disable 00:28:23.402 00:29:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:23.402 ************************************ 00:28:23.402 END TEST fio_dif_digest 00:28:23.402 ************************************ 00:28:23.402 00:29:40 nvmf_dif -- common/autotest_common.sh@1136 -- # return 0 00:28:23.402 00:29:40 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:23.402 00:29:40 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:23.402 00:29:40 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:23.402 00:29:40 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:23.402 00:29:40 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:23.402 00:29:40 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:28:23.402 00:29:40 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:23.402 00:29:40 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:23.402 rmmod nvme_tcp 00:28:23.402 rmmod nvme_fabrics 
00:28:23.402 rmmod nvme_keyring 00:28:23.402 00:29:40 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:23.402 00:29:40 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:23.402 00:29:40 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:23.402 00:29:40 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1679441 ']' 00:28:23.402 00:29:40 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1679441 00:28:23.402 00:29:40 nvmf_dif -- common/autotest_common.sh@942 -- # '[' -z 1679441 ']' 00:28:23.402 00:29:40 nvmf_dif -- common/autotest_common.sh@946 -- # kill -0 1679441 00:28:23.402 00:29:40 nvmf_dif -- common/autotest_common.sh@947 -- # uname 00:28:23.402 00:29:40 nvmf_dif -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:28:23.402 00:29:40 nvmf_dif -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1679441 00:28:23.402 00:29:40 nvmf_dif -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:28:23.402 00:29:40 nvmf_dif -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:28:23.402 00:29:40 nvmf_dif -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1679441' 00:28:23.402 killing process with pid 1679441 00:28:23.402 00:29:40 nvmf_dif -- common/autotest_common.sh@961 -- # kill 1679441 00:28:23.402 00:29:40 nvmf_dif -- common/autotest_common.sh@966 -- # wait 1679441 00:28:23.402 00:29:40 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:23.402 00:29:40 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:24.339 Waiting for block devices as requested 00:28:24.339 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:24.339 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:24.339 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:24.339 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:24.598 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:24.598 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:24.598 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:24.598 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:24.855 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:24.855 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:24.855 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:24.855 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:25.114 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:25.114 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:25.114 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:25.114 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:25.372 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:25.372 00:29:44 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:25.372 00:29:44 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:25.372 00:29:44 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:25.372 00:29:44 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:25.372 00:29:44 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.372 00:29:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:25.372 00:29:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.907 00:29:46 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:27.907 00:28:27.907 real 1m12.508s 00:28:27.907 user 7m9.672s 00:28:27.907 sys 0m17.484s 00:28:27.907 00:29:46 nvmf_dif -- common/autotest_common.sh@1118 -- # xtrace_disable 00:28:27.907 00:29:46 nvmf_dif -- common/autotest_common.sh@10 -- 
# set +x 00:28:27.907 ************************************ 00:28:27.907 END TEST nvmf_dif 00:28:27.907 ************************************ 00:28:27.907 00:29:46 -- common/autotest_common.sh@1136 -- # return 0 00:28:27.907 00:29:46 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:27.907 00:29:46 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:28:27.907 00:29:46 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:28:27.907 00:29:46 -- common/autotest_common.sh@10 -- # set +x 00:28:27.907 ************************************ 00:28:27.907 START TEST nvmf_abort_qd_sizes 00:28:27.907 ************************************ 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:27.907 * Looking for test storage... 00:28:27.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:27.907 00:29:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:27.908 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:27.908 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.908 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:27.908 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:27.908 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:27.908 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.908 00:29:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:27.908 00:29:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.908 00:29:46 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:27.908 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:27.908 00:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:28:27.908 00:29:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:33.176 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:33.176 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:33.176 Found net devices under 0000:86:00.0: cvl_0_0 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:33.176 Found net devices under 0000:86:00.1: cvl_0_1 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
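The discovery loop above resolves each supported PCI function to its kernel netdev through sysfs; by hand the same lookup is just (bdfs from this run):

    for bdf in 0000:86:00.0 0000:86:00.1; do
      ls "/sys/bus/pci/devices/$bdf/net"    # -> cvl_0_0 and cvl_0_1 on this box
    done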
00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:33.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:28:33.176 00:28:33.176 --- 10.0.0.2 ping statistics --- 00:28:33.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.176 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:33.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:28:33.176 00:28:33.176 --- 10.0.0.1 ping statistics --- 00:28:33.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.176 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:33.176 00:29:51 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:35.086 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:35.087 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:35.087 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:35.087 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:35.087 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:35.087 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:35.087 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:35.087 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:35.087 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:35.087 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:35.087 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:35.087 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:35.087 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:35.087 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:35.087 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:35.087 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:35.687 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:28:35.946 00:29:54 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:35.946 00:29:54 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:35.946 00:29:54 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:35.946 00:29:54 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:35.946 00:29:54 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:35.946 00:29:54 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:35.946 00:29:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:35.946 00:29:54 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:35.946 00:29:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:35.946 00:29:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:35.946 00:29:54 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1695730 00:28:35.946 00:29:54 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1695730 00:28:35.946 00:29:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@823 -- # '[' -z 1695730 ']' 00:28:35.946 00:29:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.946 00:29:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@828 -- # local max_retries=100 00:28:35.946 00:29:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
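Condensed, the network fixture built above moves one port of the e810 pair into a private namespace so the SPDK target (10.0.0.2 on cvl_0_0) and the kernel initiator (10.0.0.1 on cvl_0_1) talk over a real link on a single host; every command here appears verbatim in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator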
00:28:35.946 00:29:54 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:35.946 00:29:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # xtrace_disable 00:28:35.946 00:29:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:35.946 [2024-07-16 00:29:54.744597] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:28:35.946 [2024-07-16 00:29:54.744642] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.204 [2024-07-16 00:29:54.800779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:36.204 [2024-07-16 00:29:54.884245] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:36.204 [2024-07-16 00:29:54.884278] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:36.204 [2024-07-16 00:29:54.884285] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:36.204 [2024-07-16 00:29:54.884291] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:36.204 [2024-07-16 00:29:54.884296] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:36.204 [2024-07-16 00:29:54.884338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.204 [2024-07-16 00:29:54.884456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:36.204 [2024-07-16 00:29:54.884518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:36.204 [2024-07-16 00:29:54.884519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # return 0 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
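The nvme_in_userspace scan running here walks the script's cached PCI map for class 0x010802 (NVMe) and keeps bdfs whose driver lives under /sys/bus/pci/drivers/nvme. As an independent cross-check, not what the script itself runs, the same set can usually be listed with pciutils:

    lspci -Dn -d ::0108 | awk '{print $1}'    # -> 0000:5e:00.0 on this box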
00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # xtrace_disable 00:28:36.770 00:29:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:37.028 ************************************ 00:28:37.028 START TEST spdk_target_abort 00:28:37.028 ************************************ 00:28:37.028 00:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1117 -- # spdk_target 00:28:37.028 00:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:37.028 00:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:28:37.028 00:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:37.028 00:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:40.307 spdk_targetn1 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:40.307 [2024-07-16 00:29:58.459890] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:40.307 00:29:58 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:40.307 [2024-07-16 00:29:58.492766] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:40.307 00:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 
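Condensed, the provisioning sequence traced above amounts to five RPCs: attach the local PCIe SSD as the spdk_target controller (namespace spdk_targetn1), then export it over NVMe/TCP. All names and addresses below are exactly the ones in this run.

    rpc.py bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420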
00:28:42.832 Initializing NVMe Controllers
00:28:42.832 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:28:42.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:28:42.832 Initialization complete. Launching workers.
00:28:42.832 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12852, failed: 0
00:28:42.832 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1463, failed to submit 11389
00:28:42.832 success 808, unsuccess 655, failed 0
00:28:42.832 00:30:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:28:42.832 00:30:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:28:46.114 Initializing NVMe Controllers
00:28:46.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:28:46.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:28:46.114 Initialization complete. Launching workers.
00:28:46.114 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8837, failed: 0
00:28:46.114 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1240, failed to submit 7597
00:28:46.114 success 331, unsuccess 909, failed 0
00:28:46.114 00:30:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:28:46.114 00:30:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:28:49.407 Initializing NVMe Controllers
00:28:49.407 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:28:49.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:28:49.407 Initialization complete. Launching workers.
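These passes, and the completion stats interleaved with them, come from a single loop in abort_qd_sizes.sh: the same abort example is rerun at increasing queue depths. Each pass reports how many I/Os completed versus how many abort commands were submitted; as far as this example's output goes, the "success N, unsuccess M" line counts aborts that completed with a successful status versus those that did not.

    # The sweep, condensed (binary and target string exactly as traced above).
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
            "$SPDK/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done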
00:28:49.407 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37834, failed: 0 00:28:49.407 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2769, failed to submit 35065 00:28:49.407 success 585, unsuccess 2184, failed 0 00:28:49.407 00:30:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:49.407 00:30:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:49.407 00:30:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:49.407 00:30:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:49.407 00:30:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:49.407 00:30:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:49.407 00:30:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.783 00:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:50.783 00:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1695730 00:28:50.783 00:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@942 -- # '[' -z 1695730 ']' 00:28:50.783 00:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # kill -0 1695730 00:28:50.783 00:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@947 -- # uname 00:28:50.783 00:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:28:50.783 00:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1695730 00:28:50.783 00:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:28:50.783 00:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:28:50.783 00:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1695730' 00:28:50.783 killing process with pid 1695730 00:28:50.783 00:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@961 -- # kill 1695730 00:28:50.783 00:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # wait 1695730 00:28:51.043 00:28:51.043 real 0m14.041s 00:28:51.043 user 0m55.936s 00:28:51.043 sys 0m2.264s 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1118 -- # xtrace_disable 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:51.043 ************************************ 00:28:51.043 END TEST spdk_target_abort 00:28:51.043 ************************************ 00:28:51.043 00:30:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1136 -- # return 0 00:28:51.043 00:30:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:51.043 00:30:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:28:51.043 00:30:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # xtrace_disable 00:28:51.043 00:30:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:51.043 
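The teardown above follows the common killprocess pattern from autotest_common.sh. A sketch of its logic, reconstructed from the commands visible in the trace (kill -0, ps comm lookup, kill, wait):

    # Only signal a pid that is alive and is not the sudo wrapper, then reap it
    # so the exit status is collected.
    killprocess() {
            local pid=$1
            [ -n "$pid" ] || return 1
            kill -0 "$pid" 2>/dev/null || return 1
            [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
            echo "killing process with pid $pid"
            kill "$pid" && wait "$pid"
    }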
************************************ 00:28:51.043 START TEST kernel_target_abort 00:28:51.043 ************************************ 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1117 -- # kernel_target 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:51.043 00:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:53.580 Waiting for block devices as requested 00:28:53.580 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:53.580 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:53.839 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:53.839 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:53.839 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:53.839 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:54.098 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:54.098 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:54.098 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:54.098 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:54.357 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:54.357 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:54.357 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:54.357 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:54.615 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:54.615 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:54.615 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:54.615 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:54.615 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:54.615 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:54.615 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:28:54.615 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:54.615 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:28:54.615 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:54.615 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:54.615 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:54.874 No valid GPT data, bailing 00:28:54.874 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:54.874 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:54.874 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:54.874 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:54.874 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:54.874 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:54.874 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:54.874 00:30:13 
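A few lines above, before handing /dev/nvme0n1 to the kernel target, the trace checks that the namespace is not zoned and carries no usable partition table ("No valid GPT data, bailing"). An approximation of those scripts/common.sh checks, using the same blkid probe the helper falls back to:

    dev=nvme0n1
    if [ "$(cat /sys/block/$dev/queue/zoned)" != none ]; then
            echo "skipping zoned device $dev"; exit 1
    fi
    # blkid prints a partition-table type (gpt, dos, ...) only if one exists;
    # an empty result means the device is treated as free.
    pt=$(blkid -s PTTYPE -o value "/dev/$dev" || true)
    [ -z "$pt" ] && echo "/dev/$dev looks free to claim"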
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:28:54.874 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:28:54.874 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1
00:28:54.874 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:28:54.874 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1
00:28:54.874 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:28:54.874 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp
00:28:54.874 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420
00:28:54.874 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4
00:28:54.874 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:28:54.874 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:28:54.874
00:28:54.874 Discovery Log Number of Records 2, Generation counter 2
00:28:54.874 =====Discovery Log Entry 0======
00:28:54.874 trtype: tcp
00:28:54.874 adrfam: ipv4
00:28:54.874 subtype: current discovery subsystem
00:28:54.874 treq: not specified, sq flow control disable supported
00:28:54.874 portid: 1
00:28:54.874 trsvcid: 4420
00:28:54.874 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:28:54.874 traddr: 10.0.0.1
00:28:54.874 eflags: none
00:28:54.874 sectype: none
00:28:54.874 =====Discovery Log Entry 1======
00:28:54.874 trtype: tcp
00:28:54.874 adrfam: ipv4
00:28:54.874 subtype: nvme subsystem
00:28:54.874 treq: not specified, sq flow control disable supported
00:28:54.874 portid: 1
00:28:54.874 trsvcid: 4420
00:28:54.874 subnqn: nqn.2016-06.io.spdk:testnqn
00:28:54.874 traddr: 10.0.0.1
00:28:54.874 eflags: none
00:28:54.874 sectype: none
00:28:54.875 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:28:54.875 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:28:54.875 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:28:54.875 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:28:54.875 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:28:54.875 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:28:54.875 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:28:54.875 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:28:54.875 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:28:54.875 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:28:54.875 00:30:13
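xtrace does not show redirections, so the echo targets in the configure_kernel_target trace above are implicit. Spelled out against the standard nvmet configfs layout (the attribute file names below are the conventional kernel ones, inferred rather than shown by the log), the wiring is:

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"   # publish the subsystem on the port

The nvme discover call that follows confirms the result: both the discovery subsystem and nqn.2016-06.io.spdk:testnqn are listening on 10.0.0.1:4420.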
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:54.875 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:54.875 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:54.875 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:54.875 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:54.875 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:54.875 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:54.875 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:54.875 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:54.875 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:54.875 00:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:58.205 Initializing NVMe Controllers 00:28:58.205 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:58.205 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:58.205 Initialization complete. Launching workers. 00:28:58.205 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72496, failed: 0 00:28:58.205 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 72496, failed to submit 0 00:28:58.205 success 0, unsuccess 72496, failed 0 00:28:58.205 00:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:58.205 00:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:01.489 Initializing NVMe Controllers 00:29:01.489 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:01.489 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:01.489 Initialization complete. Launching workers. 
00:29:01.489 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 123325, failed: 0 00:29:01.489 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30930, failed to submit 92395 00:29:01.489 success 0, unsuccess 30930, failed 0 00:29:01.489 00:30:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:01.489 00:30:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:04.018 Initializing NVMe Controllers 00:29:04.018 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:04.018 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:04.018 Initialization complete. Launching workers. 00:29:04.018 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 117522, failed: 0 00:29:04.018 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29382, failed to submit 88140 00:29:04.018 success 0, unsuccess 29382, failed 0 00:29:04.018 00:30:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:04.018 00:30:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:04.018 00:30:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:29:04.275 00:30:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:04.275 00:30:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:04.275 00:30:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:04.275 00:30:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:04.275 00:30:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:04.275 00:30:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:04.275 00:30:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:06.851 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:06.851 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:06.851 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:06.851 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:06.851 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:06.851 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:06.851 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:06.851 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:06.851 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:06.851 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:06.851 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:06.851 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:06.851 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:06.851 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:06.851 0000:80:04.1 (8086 2021): ioatdma -> 
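clean_kernel_target, traced a few lines above, is the exact inverse of that configfs wiring; ordering matters (disable the namespace, unlink the port, then remove directories before unloading the modules). As a sketch, reusing the $subsys and $port paths from this run:

    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet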
vfio-pci 00:29:06.851 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:07.789 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:07.789 00:29:07.789 real 0m16.746s 00:29:07.789 user 0m7.589s 00:29:07.789 sys 0m4.919s 00:29:07.789 00:30:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1118 -- # xtrace_disable 00:29:07.789 00:30:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:07.789 ************************************ 00:29:07.789 END TEST kernel_target_abort 00:29:07.789 ************************************ 00:29:07.789 00:30:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1136 -- # return 0 00:29:07.789 00:30:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:07.789 00:30:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:07.789 00:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:07.789 00:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:29:07.789 00:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:07.789 00:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:29:07.789 00:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:07.789 00:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:07.789 rmmod nvme_tcp 00:29:07.789 rmmod nvme_fabrics 00:29:07.789 rmmod nvme_keyring 00:29:07.789 00:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:07.789 00:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:29:07.789 00:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:29:07.789 00:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1695730 ']' 00:29:07.789 00:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1695730 00:29:07.789 00:30:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@942 -- # '[' -z 1695730 ']' 00:29:07.789 00:30:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # kill -0 1695730 00:29:07.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (1695730) - No such process 00:29:07.789 00:30:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@969 -- # echo 'Process with pid 1695730 is not found' 00:29:07.789 Process with pid 1695730 is not found 00:29:07.789 00:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:07.789 00:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:10.323 Waiting for block devices as requested 00:29:10.323 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:10.323 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:10.323 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:10.323 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:10.582 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:10.582 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:10.582 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:10.582 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:10.841 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:10.841 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:10.841 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:11.100 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:11.100 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:11.100 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:11.100 0000:80:04.2 (8086 2021): vfio-pci -> 
ioatdma 00:29:11.358 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:11.358 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:11.358 00:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:11.358 00:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:11.358 00:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:11.358 00:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:11.358 00:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.358 00:30:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:11.358 00:30:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.893 00:30:32 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:13.893 00:29:13.893 real 0m45.995s 00:29:13.893 user 1m7.143s 00:29:13.893 sys 0m14.617s 00:29:13.893 00:30:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1118 -- # xtrace_disable 00:29:13.893 00:30:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:13.893 ************************************ 00:29:13.893 END TEST nvmf_abort_qd_sizes 00:29:13.893 ************************************ 00:29:13.893 00:30:32 -- common/autotest_common.sh@1136 -- # return 0 00:29:13.893 00:30:32 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:13.893 00:30:32 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:29:13.893 00:30:32 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:29:13.893 00:30:32 -- common/autotest_common.sh@10 -- # set +x 00:29:13.893 ************************************ 00:29:13.893 START TEST keyring_file 00:29:13.893 ************************************ 00:29:13.893 00:30:32 keyring_file -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:13.893 * Looking for test storage... 
00:29:13.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:13.893 00:30:32 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:13.893 00:30:32 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:13.893 00:30:32 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:13.893 00:30:32 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:13.893 00:30:32 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:13.893 00:30:32 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:13.893 00:30:32 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:13.893 00:30:32 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:13.893 00:30:32 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:13.894 00:30:32 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:13.894 00:30:32 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:13.894 00:30:32 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:13.894 00:30:32 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.894 00:30:32 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.894 00:30:32 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.894 00:30:32 keyring_file -- paths/export.sh@5 -- # export PATH 00:29:13.894 00:30:32 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@47 -- # : 0 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:13.894 00:30:32 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:13.894 00:30:32 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:13.894 00:30:32 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:13.894 00:30:32 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:13.894 00:30:32 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:13.894 00:30:32 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:13.894 00:30:32 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:13.894 00:30:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:13.894 00:30:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:13.894 00:30:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:13.894 00:30:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:13.894 00:30:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:13.894 00:30:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Ly2pp4mlXP 00:29:13.894 00:30:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:13.894 00:30:32 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Ly2pp4mlXP 00:29:13.894 00:30:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Ly2pp4mlXP 00:29:13.894 00:30:32 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Ly2pp4mlXP 00:29:13.894 00:30:32 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:13.894 00:30:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:13.894 00:30:32 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:13.894 00:30:32 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:13.894 00:30:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:13.894 00:30:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:13.894 00:30:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NL6j7IY9Hq 00:29:13.894 00:30:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:13.894 00:30:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:13.894 00:30:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NL6j7IY9Hq 00:29:13.894 00:30:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NL6j7IY9Hq 00:29:13.894 00:30:32 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.NL6j7IY9Hq 00:29:13.894 00:30:32 keyring_file -- keyring/file.sh@30 -- # tgtpid=1704932 00:29:13.894 00:30:32 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1704932 00:29:13.894 00:30:32 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:13.894 00:30:32 keyring_file -- common/autotest_common.sh@823 -- # '[' -z 1704932 ']' 00:29:13.894 00:30:32 keyring_file -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.894 00:30:32 keyring_file -- common/autotest_common.sh@828 -- # local max_retries=100 00:29:13.894 00:30:32 keyring_file -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.894 00:30:32 keyring_file -- common/autotest_common.sh@832 -- # xtrace_disable 00:29:13.894 00:30:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:13.894 [2024-07-16 00:30:32.532081] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
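The prep_key helper traced above boils down to: render the hex key in the NVMeTLSkey-1 interchange format, store it in a mktemp file, and clamp permissions to 0600 (the keyring module rejects anything looser, as the negative test at the end of this run shows). A sketch of the helper as it appears in the trace:

    prep_key() {
            local name=$1 key=$2 digest=$3 path
            path=$(mktemp)                      # e.g. /tmp/tmp.Ly2pp4mlXP above
            # format_interchange_psk wraps the raw hex key as
            # NVMeTLSkey-1:0<digest>:<base64 payload>: (format per the trace;
            # the exact payload encoding is done by the inline python step above)
            format_interchange_psk "$key" "$digest" > "$path"
            chmod 0600 "$path"
            echo "$path"
    }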
00:29:13.894 [2024-07-16 00:30:32.532132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1704932 ] 00:29:13.894 [2024-07-16 00:30:32.584537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.894 [2024-07-16 00:30:32.663368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.830 00:30:33 keyring_file -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:29:14.830 00:30:33 keyring_file -- common/autotest_common.sh@856 -- # return 0 00:29:14.830 00:30:33 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:14.830 00:30:33 keyring_file -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:14.830 00:30:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:14.830 [2024-07-16 00:30:33.321746] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.830 null0 00:29:14.830 [2024-07-16 00:30:33.353792] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:14.830 [2024-07-16 00:30:33.354032] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:14.830 [2024-07-16 00:30:33.361804] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:14.830 00:30:33 keyring_file -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:14.830 00:30:33 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:14.830 00:30:33 keyring_file -- common/autotest_common.sh@642 -- # local es=0 00:29:14.830 00:30:33 keyring_file -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:14.830 00:30:33 keyring_file -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:29:14.830 00:30:33 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:29:14.830 00:30:33 keyring_file -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:29:14.830 00:30:33 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:29:14.830 00:30:33 keyring_file -- common/autotest_common.sh@645 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:14.830 00:30:33 keyring_file -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:14.830 00:30:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:14.830 [2024-07-16 00:30:33.373836] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:14.830 request: 00:29:14.830 { 00:29:14.830 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:14.830 "secure_channel": false, 00:29:14.830 "listen_address": { 00:29:14.830 "trtype": "tcp", 00:29:14.830 "traddr": "127.0.0.1", 00:29:14.830 "trsvcid": "4420" 00:29:14.830 }, 00:29:14.830 "method": "nvmf_subsystem_add_listener", 00:29:14.830 "req_id": 1 00:29:14.830 } 00:29:14.830 Got JSON-RPC error response 00:29:14.830 response: 00:29:14.830 { 00:29:14.830 "code": -32602, 00:29:14.830 "message": "Invalid parameters" 00:29:14.830 } 00:29:14.831 00:30:33 keyring_file -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:29:14.831 00:30:33 keyring_file -- common/autotest_common.sh@645 -- # es=1 00:29:14.831 00:30:33 keyring_file -- 
common/autotest_common.sh@653 -- # (( es > 128 )) 00:29:14.831 00:30:33 keyring_file -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:29:14.831 00:30:33 keyring_file -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:29:14.831 00:30:33 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:14.831 00:30:33 keyring_file -- keyring/file.sh@46 -- # bperfpid=1705102 00:29:14.831 00:30:33 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1705102 /var/tmp/bperf.sock 00:29:14.831 00:30:33 keyring_file -- common/autotest_common.sh@823 -- # '[' -z 1705102 ']' 00:29:14.831 00:30:33 keyring_file -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:14.831 00:30:33 keyring_file -- common/autotest_common.sh@828 -- # local max_retries=100 00:29:14.831 00:30:33 keyring_file -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:14.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:14.831 00:30:33 keyring_file -- common/autotest_common.sh@832 -- # xtrace_disable 00:29:14.831 00:30:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:14.831 [2024-07-16 00:30:33.412846] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 00:29:14.831 [2024-07-16 00:30:33.412888] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1705102 ] 00:29:14.831 [2024-07-16 00:30:33.460703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.831 [2024-07-16 00:30:33.532862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.831 00:30:33 keyring_file -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:29:14.831 00:30:33 keyring_file -- common/autotest_common.sh@856 -- # return 0 00:29:14.831 00:30:33 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ly2pp4mlXP 00:29:14.831 00:30:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ly2pp4mlXP 00:29:15.089 00:30:33 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.NL6j7IY9Hq 00:29:15.089 00:30:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.NL6j7IY9Hq 00:29:15.347 00:30:33 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:29:15.347 00:30:33 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:29:15.347 00:30:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:15.347 00:30:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:15.347 00:30:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:15.347 00:30:34 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.Ly2pp4mlXP == \/\t\m\p\/\t\m\p\.\L\y\2\p\p\4\m\l\X\P ]] 00:29:15.347 00:30:34 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:29:15.347 00:30:34 keyring_file -- keyring/file.sh@52 -- # jq -r .path 
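Both key files are then registered with the bdevperf instance over its dedicated RPC socket and read back for verification. Condensed, with the paths from this run:

    rpc="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # talk to bdevperf, not the target
    $rpc keyring_file_add_key key0 /tmp/tmp.Ly2pp4mlXP
    $rpc keyring_file_add_key key1 /tmp/tmp.NL6j7IY9Hq
    # keyring_get_keys returns a JSON array; pick an entry and check its path
    $rpc keyring_get_keys | jq -r '.[] | select(.name == "key0").path'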
00:29:15.347 00:30:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:15.347 00:30:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:15.347 00:30:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:15.605 00:30:34 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.NL6j7IY9Hq == \/\t\m\p\/\t\m\p\.\N\L\6\j\7\I\Y\9\H\q ]] 00:29:15.605 00:30:34 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:29:15.605 00:30:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:15.605 00:30:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:15.605 00:30:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:15.605 00:30:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:15.605 00:30:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:15.865 00:30:34 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:15.865 00:30:34 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:29:15.865 00:30:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:15.865 00:30:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:15.865 00:30:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:15.865 00:30:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:15.865 00:30:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:15.865 00:30:34 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:15.865 00:30:34 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:15.865 00:30:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:16.125 [2024-07-16 00:30:34.877328] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:16.125 nvme0n1 00:29:16.125 00:30:34 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:29:16.125 00:30:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:16.125 00:30:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:16.125 00:30:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:16.125 00:30:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:16.125 00:30:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:16.384 00:30:35 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:16.384 00:30:35 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:29:16.384 00:30:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:16.384 00:30:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:16.384 00:30:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:16.384 00:30:35 keyring_file -- keyring/common.sh@8 -- # 
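With key0 loaded, the TLS attach traced above references the key by its keyring name rather than by file path; once nvme0 holds it, the refcnt check expects 2 (one reference for the keyring entry, one for the controller):

    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0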
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:16.384 00:30:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:16.643 00:30:35 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:16.643 00:30:35 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:16.643 Running I/O for 1 seconds... 00:29:17.611 00:29:17.611 Latency(us) 00:29:17.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.611 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:17.611 nvme0n1 : 1.01 11217.81 43.82 0.00 0.00 11347.04 5584.81 17780.20 00:29:17.611 =================================================================================================================== 00:29:17.611 Total : 11217.81 43.82 0.00 0.00 11347.04 5584.81 17780.20 00:29:17.611 0 00:29:17.611 00:30:36 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:17.611 00:30:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:17.869 00:30:36 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:29:17.869 00:30:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:17.869 00:30:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:17.869 00:30:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:17.869 00:30:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:17.869 00:30:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:18.128 00:30:36 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:18.128 00:30:36 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:29:18.128 00:30:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:18.128 00:30:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:18.128 00:30:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:18.128 00:30:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:18.128 00:30:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:18.400 00:30:36 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:18.400 00:30:36 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:18.400 00:30:36 keyring_file -- common/autotest_common.sh@642 -- # local es=0 00:29:18.400 00:30:36 keyring_file -- common/autotest_common.sh@644 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:18.400 00:30:36 keyring_file -- common/autotest_common.sh@630 -- # local arg=bperf_cmd 00:29:18.400 00:30:36 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:29:18.400 00:30:36 keyring_file -- common/autotest_common.sh@634 -- # type -t bperf_cmd 00:29:18.400 00:30:36 keyring_file -- common/autotest_common.sh@634 -- # case 
"$(type -t "$arg")" in 00:29:18.400 00:30:36 keyring_file -- common/autotest_common.sh@645 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:18.400 00:30:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:18.400 [2024-07-16 00:30:37.159050] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:18.400 [2024-07-16 00:30:37.159542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1011770 (107): Transport endpoint is not connected 00:29:18.400 [2024-07-16 00:30:37.160537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1011770 (9): Bad file descriptor 00:29:18.400 [2024-07-16 00:30:37.161538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:18.400 [2024-07-16 00:30:37.161550] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:18.400 [2024-07-16 00:30:37.161557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:18.400 request: 00:29:18.400 { 00:29:18.400 "name": "nvme0", 00:29:18.400 "trtype": "tcp", 00:29:18.400 "traddr": "127.0.0.1", 00:29:18.400 "adrfam": "ipv4", 00:29:18.400 "trsvcid": "4420", 00:29:18.400 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:18.400 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:18.400 "prchk_reftag": false, 00:29:18.400 "prchk_guard": false, 00:29:18.400 "hdgst": false, 00:29:18.400 "ddgst": false, 00:29:18.400 "psk": "key1", 00:29:18.400 "method": "bdev_nvme_attach_controller", 00:29:18.400 "req_id": 1 00:29:18.400 } 00:29:18.400 Got JSON-RPC error response 00:29:18.400 response: 00:29:18.400 { 00:29:18.400 "code": -5, 00:29:18.400 "message": "Input/output error" 00:29:18.400 } 00:29:18.400 00:30:37 keyring_file -- common/autotest_common.sh@645 -- # es=1 00:29:18.400 00:30:37 keyring_file -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:29:18.400 00:30:37 keyring_file -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:29:18.400 00:30:37 keyring_file -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:29:18.400 00:30:37 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:29:18.400 00:30:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:18.400 00:30:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:18.400 00:30:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:18.400 00:30:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:18.400 00:30:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:18.657 00:30:37 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:18.657 00:30:37 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:29:18.657 00:30:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:18.657 00:30:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:18.658 00:30:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:29:18.658 00:30:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:18.658 00:30:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:18.916 00:30:37 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:18.916 00:30:37 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:18.916 00:30:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:18.916 00:30:37 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:18.916 00:30:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:19.174 00:30:37 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:19.174 00:30:37 keyring_file -- keyring/file.sh@77 -- # jq length 00:29:19.174 00:30:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:19.433 00:30:38 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:19.433 00:30:38 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.Ly2pp4mlXP 00:29:19.433 00:30:38 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ly2pp4mlXP 00:29:19.433 00:30:38 keyring_file -- common/autotest_common.sh@642 -- # local es=0 00:29:19.433 00:30:38 keyring_file -- common/autotest_common.sh@644 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ly2pp4mlXP 00:29:19.433 00:30:38 keyring_file -- common/autotest_common.sh@630 -- # local arg=bperf_cmd 00:29:19.433 00:30:38 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:29:19.433 00:30:38 keyring_file -- common/autotest_common.sh@634 -- # type -t bperf_cmd 00:29:19.433 00:30:38 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:29:19.433 00:30:38 keyring_file -- common/autotest_common.sh@645 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ly2pp4mlXP 00:29:19.433 00:30:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ly2pp4mlXP 00:29:19.433 [2024-07-16 00:30:38.189523] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Ly2pp4mlXP': 0100660 00:29:19.433 [2024-07-16 00:30:38.189546] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:19.433 request: 00:29:19.433 { 00:29:19.433 "name": "key0", 00:29:19.433 "path": "/tmp/tmp.Ly2pp4mlXP", 00:29:19.433 "method": "keyring_file_add_key", 00:29:19.433 "req_id": 1 00:29:19.433 } 00:29:19.433 Got JSON-RPC error response 00:29:19.433 response: 00:29:19.433 { 00:29:19.433 "code": -1, 00:29:19.433 "message": "Operation not permitted" 00:29:19.433 } 00:29:19.433 00:30:38 keyring_file -- common/autotest_common.sh@645 -- # es=1 00:29:19.433 00:30:38 keyring_file -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:29:19.433 00:30:38 keyring_file -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:29:19.433 00:30:38 keyring_file -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:29:19.433 00:30:38 keyring_file -- keyring/file.sh@84 -- # chmod 0600 
/tmp/tmp.Ly2pp4mlXP 00:29:19.433 00:30:38 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ly2pp4mlXP 00:29:19.433 00:30:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ly2pp4mlXP 00:29:19.691 00:30:38 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.Ly2pp4mlXP 00:29:19.691 00:30:38 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:29:19.691 00:30:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:19.691 00:30:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:19.691 00:30:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:19.691 00:30:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:19.691 00:30:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:19.950 00:30:38 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:19.950 00:30:38 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:19.950 00:30:38 keyring_file -- common/autotest_common.sh@642 -- # local es=0 00:29:19.950 00:30:38 keyring_file -- common/autotest_common.sh@644 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:19.950 00:30:38 keyring_file -- common/autotest_common.sh@630 -- # local arg=bperf_cmd 00:29:19.950 00:30:38 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:29:19.950 00:30:38 keyring_file -- common/autotest_common.sh@634 -- # type -t bperf_cmd 00:29:19.950 00:30:38 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:29:19.950 00:30:38 keyring_file -- common/autotest_common.sh@645 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:19.950 00:30:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:19.950 [2024-07-16 00:30:38.726951] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Ly2pp4mlXP': No such file or directory 00:29:19.950 [2024-07-16 00:30:38.726973] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:19.950 [2024-07-16 00:30:38.726992] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:19.950 [2024-07-16 00:30:38.726998] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:19.950 [2024-07-16 00:30:38.727003] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:19.950 request: 00:29:19.950 { 00:29:19.950 "name": "nvme0", 00:29:19.950 "trtype": "tcp", 00:29:19.950 "traddr": "127.0.0.1", 00:29:19.950 "adrfam": "ipv4", 00:29:19.950 "trsvcid": "4420", 00:29:19.950 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:19.950 "hostnqn": "nqn.2016-06.io.spdk:host0", 
00:29:19.950 "prchk_reftag": false, 00:29:19.950 "prchk_guard": false, 00:29:19.950 "hdgst": false, 00:29:19.950 "ddgst": false, 00:29:19.950 "psk": "key0", 00:29:19.950 "method": "bdev_nvme_attach_controller", 00:29:19.950 "req_id": 1 00:29:19.950 } 00:29:19.950 Got JSON-RPC error response 00:29:19.950 response: 00:29:19.950 { 00:29:19.950 "code": -19, 00:29:19.950 "message": "No such device" 00:29:19.950 } 00:29:19.950 00:30:38 keyring_file -- common/autotest_common.sh@645 -- # es=1 00:29:19.950 00:30:38 keyring_file -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:29:19.950 00:30:38 keyring_file -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:29:19.950 00:30:38 keyring_file -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:29:19.950 00:30:38 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:19.950 00:30:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:20.209 00:30:38 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:20.209 00:30:38 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:20.209 00:30:38 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:20.209 00:30:38 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:20.209 00:30:38 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:20.209 00:30:38 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:20.209 00:30:38 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UefNzeWP9i 00:29:20.209 00:30:38 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:20.209 00:30:38 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:20.209 00:30:38 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:20.209 00:30:38 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:20.209 00:30:38 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:20.209 00:30:38 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:20.209 00:30:38 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:20.209 00:30:38 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UefNzeWP9i 00:29:20.209 00:30:38 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UefNzeWP9i 00:29:20.209 00:30:38 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.UefNzeWP9i 00:29:20.209 00:30:38 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UefNzeWP9i 00:29:20.209 00:30:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UefNzeWP9i 00:29:20.467 00:30:39 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:20.467 00:30:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:20.725 nvme0n1 00:29:20.725 00:30:39 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:29:20.725 00:30:39 keyring_file -- keyring/common.sh@12 -- 
# get_key key0 00:29:20.725 00:30:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:20.725 00:30:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:20.725 00:30:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:20.725 00:30:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:20.725 00:30:39 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:20.725 00:30:39 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:20.725 00:30:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:20.983 00:30:39 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:29:20.983 00:30:39 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:29:20.983 00:30:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:20.983 00:30:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:20.983 00:30:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:21.242 00:30:39 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:21.242 00:30:39 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:29:21.242 00:30:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:21.242 00:30:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:21.242 00:30:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:21.242 00:30:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:21.242 00:30:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:21.242 00:30:40 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:21.242 00:30:40 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:21.242 00:30:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:21.502 00:30:40 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:21.502 00:30:40 keyring_file -- keyring/file.sh@104 -- # jq length 00:29:21.502 00:30:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:21.761 00:30:40 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:21.761 00:30:40 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UefNzeWP9i 00:29:21.761 00:30:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UefNzeWP9i 00:29:22.020 00:30:40 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.NL6j7IY9Hq 00:29:22.020 00:30:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.NL6j7IY9Hq 00:29:22.020 00:30:40 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:22.020 00:30:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:22.279 nvme0n1 00:29:22.279 00:30:41 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:22.279 00:30:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:22.538 00:30:41 keyring_file -- keyring/file.sh@112 -- # config='{ 00:29:22.538 "subsystems": [ 00:29:22.538 { 00:29:22.538 "subsystem": "keyring", 00:29:22.538 "config": [ 00:29:22.538 { 00:29:22.538 "method": "keyring_file_add_key", 00:29:22.538 "params": { 00:29:22.538 "name": "key0", 00:29:22.538 "path": "/tmp/tmp.UefNzeWP9i" 00:29:22.538 } 00:29:22.538 }, 00:29:22.538 { 00:29:22.538 "method": "keyring_file_add_key", 00:29:22.538 "params": { 00:29:22.538 "name": "key1", 00:29:22.538 "path": "/tmp/tmp.NL6j7IY9Hq" 00:29:22.538 } 00:29:22.538 } 00:29:22.538 ] 00:29:22.538 }, 00:29:22.538 { 00:29:22.538 "subsystem": "iobuf", 00:29:22.538 "config": [ 00:29:22.538 { 00:29:22.538 "method": "iobuf_set_options", 00:29:22.538 "params": { 00:29:22.538 "small_pool_count": 8192, 00:29:22.538 "large_pool_count": 1024, 00:29:22.538 "small_bufsize": 8192, 00:29:22.538 "large_bufsize": 135168 00:29:22.538 } 00:29:22.538 } 00:29:22.538 ] 00:29:22.538 }, 00:29:22.538 { 00:29:22.538 "subsystem": "sock", 00:29:22.538 "config": [ 00:29:22.538 { 00:29:22.538 "method": "sock_set_default_impl", 00:29:22.538 "params": { 00:29:22.538 "impl_name": "posix" 00:29:22.538 } 00:29:22.538 }, 00:29:22.538 { 00:29:22.538 "method": "sock_impl_set_options", 00:29:22.538 "params": { 00:29:22.538 "impl_name": "ssl", 00:29:22.538 "recv_buf_size": 4096, 00:29:22.538 "send_buf_size": 4096, 00:29:22.538 "enable_recv_pipe": true, 00:29:22.538 "enable_quickack": false, 00:29:22.538 "enable_placement_id": 0, 00:29:22.538 "enable_zerocopy_send_server": true, 00:29:22.538 "enable_zerocopy_send_client": false, 00:29:22.538 "zerocopy_threshold": 0, 00:29:22.538 "tls_version": 0, 00:29:22.538 "enable_ktls": false 00:29:22.538 } 00:29:22.538 }, 00:29:22.538 { 00:29:22.538 "method": "sock_impl_set_options", 00:29:22.538 "params": { 00:29:22.538 "impl_name": "posix", 00:29:22.538 "recv_buf_size": 2097152, 00:29:22.538 "send_buf_size": 2097152, 00:29:22.538 "enable_recv_pipe": true, 00:29:22.538 "enable_quickack": false, 00:29:22.538 "enable_placement_id": 0, 00:29:22.538 "enable_zerocopy_send_server": true, 00:29:22.538 "enable_zerocopy_send_client": false, 00:29:22.538 "zerocopy_threshold": 0, 00:29:22.538 "tls_version": 0, 00:29:22.538 "enable_ktls": false 00:29:22.538 } 00:29:22.538 } 00:29:22.538 ] 00:29:22.538 }, 00:29:22.538 { 00:29:22.538 "subsystem": "vmd", 00:29:22.538 "config": [] 00:29:22.538 }, 00:29:22.538 { 00:29:22.538 "subsystem": "accel", 00:29:22.538 "config": [ 00:29:22.538 { 00:29:22.538 "method": "accel_set_options", 00:29:22.538 "params": { 00:29:22.538 "small_cache_size": 128, 00:29:22.538 "large_cache_size": 16, 00:29:22.538 "task_count": 2048, 00:29:22.538 "sequence_count": 2048, 00:29:22.538 "buf_count": 2048 00:29:22.538 } 00:29:22.538 } 00:29:22.538 ] 00:29:22.538 }, 00:29:22.538 { 00:29:22.538 "subsystem": "bdev", 00:29:22.538 "config": [ 00:29:22.538 { 00:29:22.538 "method": 
"bdev_set_options", 00:29:22.538 "params": { 00:29:22.538 "bdev_io_pool_size": 65535, 00:29:22.538 "bdev_io_cache_size": 256, 00:29:22.538 "bdev_auto_examine": true, 00:29:22.538 "iobuf_small_cache_size": 128, 00:29:22.538 "iobuf_large_cache_size": 16 00:29:22.538 } 00:29:22.538 }, 00:29:22.538 { 00:29:22.538 "method": "bdev_raid_set_options", 00:29:22.538 "params": { 00:29:22.538 "process_window_size_kb": 1024 00:29:22.538 } 00:29:22.538 }, 00:29:22.538 { 00:29:22.538 "method": "bdev_iscsi_set_options", 00:29:22.538 "params": { 00:29:22.538 "timeout_sec": 30 00:29:22.538 } 00:29:22.538 }, 00:29:22.538 { 00:29:22.538 "method": "bdev_nvme_set_options", 00:29:22.538 "params": { 00:29:22.538 "action_on_timeout": "none", 00:29:22.538 "timeout_us": 0, 00:29:22.538 "timeout_admin_us": 0, 00:29:22.538 "keep_alive_timeout_ms": 10000, 00:29:22.538 "arbitration_burst": 0, 00:29:22.538 "low_priority_weight": 0, 00:29:22.538 "medium_priority_weight": 0, 00:29:22.538 "high_priority_weight": 0, 00:29:22.538 "nvme_adminq_poll_period_us": 10000, 00:29:22.538 "nvme_ioq_poll_period_us": 0, 00:29:22.538 "io_queue_requests": 512, 00:29:22.538 "delay_cmd_submit": true, 00:29:22.538 "transport_retry_count": 4, 00:29:22.538 "bdev_retry_count": 3, 00:29:22.538 "transport_ack_timeout": 0, 00:29:22.538 "ctrlr_loss_timeout_sec": 0, 00:29:22.538 "reconnect_delay_sec": 0, 00:29:22.538 "fast_io_fail_timeout_sec": 0, 00:29:22.538 "disable_auto_failback": false, 00:29:22.538 "generate_uuids": false, 00:29:22.538 "transport_tos": 0, 00:29:22.538 "nvme_error_stat": false, 00:29:22.538 "rdma_srq_size": 0, 00:29:22.538 "io_path_stat": false, 00:29:22.538 "allow_accel_sequence": false, 00:29:22.538 "rdma_max_cq_size": 0, 00:29:22.538 "rdma_cm_event_timeout_ms": 0, 00:29:22.538 "dhchap_digests": [ 00:29:22.538 "sha256", 00:29:22.538 "sha384", 00:29:22.538 "sha512" 00:29:22.538 ], 00:29:22.538 "dhchap_dhgroups": [ 00:29:22.538 "null", 00:29:22.538 "ffdhe2048", 00:29:22.538 "ffdhe3072", 00:29:22.538 "ffdhe4096", 00:29:22.538 "ffdhe6144", 00:29:22.538 "ffdhe8192" 00:29:22.538 ] 00:29:22.538 } 00:29:22.538 }, 00:29:22.538 { 00:29:22.538 "method": "bdev_nvme_attach_controller", 00:29:22.538 "params": { 00:29:22.538 "name": "nvme0", 00:29:22.538 "trtype": "TCP", 00:29:22.538 "adrfam": "IPv4", 00:29:22.538 "traddr": "127.0.0.1", 00:29:22.538 "trsvcid": "4420", 00:29:22.538 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:22.538 "prchk_reftag": false, 00:29:22.538 "prchk_guard": false, 00:29:22.538 "ctrlr_loss_timeout_sec": 0, 00:29:22.538 "reconnect_delay_sec": 0, 00:29:22.538 "fast_io_fail_timeout_sec": 0, 00:29:22.538 "psk": "key0", 00:29:22.538 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:22.538 "hdgst": false, 00:29:22.538 "ddgst": false 00:29:22.538 } 00:29:22.538 }, 00:29:22.538 { 00:29:22.538 "method": "bdev_nvme_set_hotplug", 00:29:22.538 "params": { 00:29:22.538 "period_us": 100000, 00:29:22.538 "enable": false 00:29:22.538 } 00:29:22.538 }, 00:29:22.538 { 00:29:22.538 "method": "bdev_wait_for_examine" 00:29:22.539 } 00:29:22.539 ] 00:29:22.539 }, 00:29:22.539 { 00:29:22.539 "subsystem": "nbd", 00:29:22.539 "config": [] 00:29:22.539 } 00:29:22.539 ] 00:29:22.539 }' 00:29:22.539 00:30:41 keyring_file -- keyring/file.sh@114 -- # killprocess 1705102 00:29:22.539 00:30:41 keyring_file -- common/autotest_common.sh@942 -- # '[' -z 1705102 ']' 00:29:22.539 00:30:41 keyring_file -- common/autotest_common.sh@946 -- # kill -0 1705102 00:29:22.539 00:30:41 keyring_file -- common/autotest_common.sh@947 -- # uname 00:29:22.539 00:30:41 
keyring_file -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:29:22.539 00:30:41 keyring_file -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1705102 00:29:22.539 00:30:41 keyring_file -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:29:22.539 00:30:41 keyring_file -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:29:22.539 00:30:41 keyring_file -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1705102' 00:29:22.539 killing process with pid 1705102 00:29:22.539 00:30:41 keyring_file -- common/autotest_common.sh@961 -- # kill 1705102 00:29:22.539 Received shutdown signal, test time was about 1.000000 seconds 00:29:22.539 00:29:22.539 Latency(us) 00:29:22.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.539 =================================================================================================================== 00:29:22.539 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:22.539 00:30:41 keyring_file -- common/autotest_common.sh@966 -- # wait 1705102 00:29:22.797 00:30:41 keyring_file -- keyring/file.sh@117 -- # bperfpid=1706627 00:29:22.797 00:30:41 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1706627 /var/tmp/bperf.sock 00:29:22.797 00:30:41 keyring_file -- common/autotest_common.sh@823 -- # '[' -z 1706627 ']' 00:29:22.797 00:30:41 keyring_file -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:22.797 00:30:41 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:22.797 00:30:41 keyring_file -- common/autotest_common.sh@828 -- # local max_retries=100 00:29:22.797 00:30:41 keyring_file -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:22.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
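The restart above is the config-replay pattern: file.sh captured the live configuration with save_config before killing the first bdevperf, and now hands it to a fresh instance through /dev/fd/63. A minimal sketch of that pattern, assuming SPDK_ROOT is a hypothetical variable pointing at the SPDK checkout used in this run:

# Capture the running app's JSON config before shutdown, then start a second
# bdevperf with it; <(...) is bash process substitution, which the child
# process sees as /dev/fd/<n>.
config=$("$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock save_config)
"$SPDK_ROOT/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")

The JSON echoed next is exactly that captured configuration, keyring_file_add_key entries included, so the new instance comes up with the same key0/key1 state before any I/O runs.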
00:29:22.797 00:30:41 keyring_file -- common/autotest_common.sh@832 -- # xtrace_disable 00:29:22.797 00:30:41 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:29:22.797 "subsystems": [ 00:29:22.797 { 00:29:22.797 "subsystem": "keyring", 00:29:22.797 "config": [ 00:29:22.797 { 00:29:22.797 "method": "keyring_file_add_key", 00:29:22.797 "params": { 00:29:22.797 "name": "key0", 00:29:22.797 "path": "/tmp/tmp.UefNzeWP9i" 00:29:22.797 } 00:29:22.797 }, 00:29:22.797 { 00:29:22.797 "method": "keyring_file_add_key", 00:29:22.797 "params": { 00:29:22.797 "name": "key1", 00:29:22.797 "path": "/tmp/tmp.NL6j7IY9Hq" 00:29:22.797 } 00:29:22.798 } 00:29:22.798 ] 00:29:22.798 }, 00:29:22.798 { 00:29:22.798 "subsystem": "iobuf", 00:29:22.798 "config": [ 00:29:22.798 { 00:29:22.798 "method": "iobuf_set_options", 00:29:22.798 "params": { 00:29:22.798 "small_pool_count": 8192, 00:29:22.798 "large_pool_count": 1024, 00:29:22.798 "small_bufsize": 8192, 00:29:22.798 "large_bufsize": 135168 00:29:22.798 } 00:29:22.798 } 00:29:22.798 ] 00:29:22.798 }, 00:29:22.798 { 00:29:22.798 "subsystem": "sock", 00:29:22.798 "config": [ 00:29:22.798 { 00:29:22.798 "method": "sock_set_default_impl", 00:29:22.798 "params": { 00:29:22.798 "impl_name": "posix" 00:29:22.798 } 00:29:22.798 }, 00:29:22.798 { 00:29:22.798 "method": "sock_impl_set_options", 00:29:22.798 "params": { 00:29:22.798 "impl_name": "ssl", 00:29:22.798 "recv_buf_size": 4096, 00:29:22.798 "send_buf_size": 4096, 00:29:22.798 "enable_recv_pipe": true, 00:29:22.798 "enable_quickack": false, 00:29:22.798 "enable_placement_id": 0, 00:29:22.798 "enable_zerocopy_send_server": true, 00:29:22.798 "enable_zerocopy_send_client": false, 00:29:22.798 "zerocopy_threshold": 0, 00:29:22.798 "tls_version": 0, 00:29:22.798 "enable_ktls": false 00:29:22.798 } 00:29:22.798 }, 00:29:22.798 { 00:29:22.798 "method": "sock_impl_set_options", 00:29:22.798 "params": { 00:29:22.798 "impl_name": "posix", 00:29:22.798 "recv_buf_size": 2097152, 00:29:22.798 "send_buf_size": 2097152, 00:29:22.798 "enable_recv_pipe": true, 00:29:22.798 "enable_quickack": false, 00:29:22.798 "enable_placement_id": 0, 00:29:22.798 "enable_zerocopy_send_server": true, 00:29:22.798 "enable_zerocopy_send_client": false, 00:29:22.798 "zerocopy_threshold": 0, 00:29:22.798 "tls_version": 0, 00:29:22.798 "enable_ktls": false 00:29:22.798 } 00:29:22.798 } 00:29:22.798 ] 00:29:22.798 }, 00:29:22.798 { 00:29:22.798 "subsystem": "vmd", 00:29:22.798 "config": [] 00:29:22.798 }, 00:29:22.798 { 00:29:22.798 "subsystem": "accel", 00:29:22.798 "config": [ 00:29:22.798 { 00:29:22.798 "method": "accel_set_options", 00:29:22.798 "params": { 00:29:22.798 "small_cache_size": 128, 00:29:22.798 "large_cache_size": 16, 00:29:22.798 "task_count": 2048, 00:29:22.798 "sequence_count": 2048, 00:29:22.798 "buf_count": 2048 00:29:22.798 } 00:29:22.798 } 00:29:22.798 ] 00:29:22.798 }, 00:29:22.798 { 00:29:22.798 "subsystem": "bdev", 00:29:22.798 "config": [ 00:29:22.798 { 00:29:22.798 "method": "bdev_set_options", 00:29:22.798 "params": { 00:29:22.798 "bdev_io_pool_size": 65535, 00:29:22.798 "bdev_io_cache_size": 256, 00:29:22.798 "bdev_auto_examine": true, 00:29:22.798 "iobuf_small_cache_size": 128, 00:29:22.798 "iobuf_large_cache_size": 16 00:29:22.798 } 00:29:22.798 }, 00:29:22.798 { 00:29:22.798 "method": "bdev_raid_set_options", 00:29:22.798 "params": { 00:29:22.798 "process_window_size_kb": 1024 00:29:22.798 } 00:29:22.798 }, 00:29:22.798 { 00:29:22.798 "method": "bdev_iscsi_set_options", 00:29:22.798 "params": { 00:29:22.798 
"timeout_sec": 30 00:29:22.798 } 00:29:22.798 }, 00:29:22.798 { 00:29:22.798 "method": "bdev_nvme_set_options", 00:29:22.798 "params": { 00:29:22.798 "action_on_timeout": "none", 00:29:22.798 "timeout_us": 0, 00:29:22.798 "timeout_admin_us": 0, 00:29:22.798 "keep_alive_timeout_ms": 10000, 00:29:22.798 "arbitration_burst": 0, 00:29:22.798 "low_priority_weight": 0, 00:29:22.798 "medium_priority_weight": 0, 00:29:22.798 "high_priority_weight": 0, 00:29:22.798 "nvme_adminq_poll_period_us": 10000, 00:29:22.798 "nvme_ioq_poll_period_us": 0, 00:29:22.798 "io_queue_requests": 512, 00:29:22.798 "delay_cmd_submit": true, 00:29:22.798 "transport_retry_count": 4, 00:29:22.798 "bdev_retry_count": 3, 00:29:22.798 "transport_ack_timeout": 0, 00:29:22.798 "ctrlr_loss_timeout_sec": 0, 00:29:22.798 "reconnect_delay_sec": 0, 00:29:22.798 "fast_io_fail_timeout_sec": 0, 00:29:22.798 "disable_auto_failback": false, 00:29:22.798 "generate_uuids": false, 00:29:22.798 "transport_tos": 0, 00:29:22.798 "nvme_error_stat": false, 00:29:22.798 "rdma_srq_size": 0, 00:29:22.798 "io_path_stat": false, 00:29:22.798 "allow_accel_sequence": false, 00:29:22.798 "rdma_max_cq_size": 0, 00:29:22.798 "rdma_cm_event_timeout_ms": 0, 00:29:22.798 "dhchap_digests": [ 00:29:22.798 "sha256", 00:29:22.798 "sha384", 00:29:22.798 "sha512" 00:29:22.798 ], 00:29:22.798 "dhchap_dhgroups": [ 00:29:22.798 "null", 00:29:22.798 "ffdhe2048", 00:29:22.798 "ffdhe3072", 00:29:22.798 "ffdhe4096", 00:29:22.798 "ffdhe6144", 00:29:22.798 "ffdhe8192" 00:29:22.798 ] 00:29:22.798 } 00:29:22.798 }, 00:29:22.798 { 00:29:22.798 "method": "bdev_nvme_attach_controller", 00:29:22.798 "params": { 00:29:22.798 "name": "nvme0", 00:29:22.798 "trtype": "TCP", 00:29:22.798 "adrfam": "IPv4", 00:29:22.798 "traddr": "127.0.0.1", 00:29:22.798 "trsvcid": "4420", 00:29:22.798 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:22.798 "prchk_reftag": false, 00:29:22.798 "prchk_guard": false, 00:29:22.798 "ctrlr_loss_timeout_sec": 0, 00:29:22.798 "reconnect_delay_sec": 0, 00:29:22.798 "fast_io_fail_timeout_sec": 0, 00:29:22.798 "psk": "key0", 00:29:22.798 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:22.798 "hdgst": false, 00:29:22.798 "ddgst": false 00:29:22.798 } 00:29:22.798 }, 00:29:22.798 { 00:29:22.798 "method": "bdev_nvme_set_hotplug", 00:29:22.798 "params": { 00:29:22.798 "period_us": 100000, 00:29:22.798 "enable": false 00:29:22.798 } 00:29:22.798 }, 00:29:22.798 { 00:29:22.798 "method": "bdev_wait_for_examine" 00:29:22.798 } 00:29:22.798 ] 00:29:22.798 }, 00:29:22.798 { 00:29:22.798 "subsystem": "nbd", 00:29:22.798 "config": [] 00:29:22.798 } 00:29:22.798 ] 00:29:22.798 }' 00:29:22.798 00:30:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:22.798 [2024-07-16 00:30:41.563031] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
00:29:22.798 [2024-07-16 00:30:41.563078] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706627 ] 00:29:22.798 [2024-07-16 00:30:41.617671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.057 [2024-07-16 00:30:41.694804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.057 [2024-07-16 00:30:41.853991] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:23.625 00:30:42 keyring_file -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:29:23.625 00:30:42 keyring_file -- common/autotest_common.sh@856 -- # return 0 00:29:23.625 00:30:42 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:23.625 00:30:42 keyring_file -- keyring/file.sh@120 -- # jq length 00:29:23.625 00:30:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:23.884 00:30:42 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:23.884 00:30:42 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:29:23.884 00:30:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:23.884 00:30:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:23.884 00:30:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:23.884 00:30:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:23.884 00:30:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:23.884 00:30:42 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:23.884 00:30:42 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:29:23.884 00:30:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:23.884 00:30:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:23.884 00:30:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:23.884 00:30:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:23.884 00:30:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:24.143 00:30:42 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:24.143 00:30:42 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:24.143 00:30:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:24.143 00:30:42 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:24.402 00:30:43 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:24.402 00:30:43 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:24.402 00:30:43 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.UefNzeWP9i /tmp/tmp.NL6j7IY9Hq 00:29:24.402 00:30:43 keyring_file -- keyring/file.sh@20 -- # killprocess 1706627 00:29:24.402 00:30:43 keyring_file -- common/autotest_common.sh@942 -- # '[' -z 1706627 ']' 00:29:24.402 00:30:43 keyring_file -- common/autotest_common.sh@946 -- # kill -0 1706627 00:29:24.402 00:30:43 keyring_file -- common/autotest_common.sh@947 -- # uname 00:29:24.402 00:30:43 
keyring_file -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:29:24.402 00:30:43 keyring_file -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1706627 00:29:24.402 00:30:43 keyring_file -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:29:24.402 00:30:43 keyring_file -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:29:24.402 00:30:43 keyring_file -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1706627' 00:29:24.402 killing process with pid 1706627 00:29:24.402 00:30:43 keyring_file -- common/autotest_common.sh@961 -- # kill 1706627 00:29:24.402 Received shutdown signal, test time was about 1.000000 seconds 00:29:24.402 00:29:24.402 Latency(us) 00:29:24.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.402 =================================================================================================================== 00:29:24.402 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:24.402 00:30:43 keyring_file -- common/autotest_common.sh@966 -- # wait 1706627 00:29:24.665 00:30:43 keyring_file -- keyring/file.sh@21 -- # killprocess 1704932 00:29:24.665 00:30:43 keyring_file -- common/autotest_common.sh@942 -- # '[' -z 1704932 ']' 00:29:24.665 00:30:43 keyring_file -- common/autotest_common.sh@946 -- # kill -0 1704932 00:29:24.665 00:30:43 keyring_file -- common/autotest_common.sh@947 -- # uname 00:29:24.665 00:30:43 keyring_file -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:29:24.665 00:30:43 keyring_file -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1704932 00:29:24.665 00:30:43 keyring_file -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:29:24.665 00:30:43 keyring_file -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:29:24.665 00:30:43 keyring_file -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1704932' 00:29:24.665 killing process with pid 1704932 00:29:24.665 00:30:43 keyring_file -- common/autotest_common.sh@961 -- # kill 1704932 00:29:24.665 [2024-07-16 00:30:43.345397] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:24.665 00:30:43 keyring_file -- common/autotest_common.sh@966 -- # wait 1704932 00:29:24.922 00:29:24.922 real 0m11.377s 00:29:24.922 user 0m27.043s 00:29:24.922 sys 0m2.655s 00:29:24.922 00:30:43 keyring_file -- common/autotest_common.sh@1118 -- # xtrace_disable 00:29:24.922 00:30:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:24.922 ************************************ 00:29:24.922 END TEST keyring_file 00:29:24.922 ************************************ 00:29:24.922 00:30:43 -- common/autotest_common.sh@1136 -- # return 0 00:29:24.922 00:30:43 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:29:24.922 00:30:43 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:24.923 00:30:43 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:29:24.923 00:30:43 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:29:24.923 00:30:43 -- common/autotest_common.sh@10 -- # set +x 00:29:24.923 ************************************ 00:29:24.923 START TEST keyring_linux 00:29:24.923 ************************************ 00:29:24.923 00:30:43 keyring_linux -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:25.181 * Looking for 
test storage... 00:29:25.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:25.181 00:30:43 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:25.181 00:30:43 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:25.181 00:30:43 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:25.181 00:30:43 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.181 00:30:43 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.181 00:30:43 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:25.181 00:30:43 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:25.181 00:30:43 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:25.181 00:30:43 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:25.181 00:30:43 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:25.181 00:30:43 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.182 00:30:43 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.182 00:30:43 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.182 00:30:43 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.182 00:30:43 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.182 00:30:43 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.182 00:30:43 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.182 00:30:43 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:25.182 00:30:43 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:25.182 00:30:43 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:25.182 00:30:43 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:25.182 00:30:43 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:25.182 00:30:43 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:25.182 00:30:43 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:25.182 00:30:43 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:25.182 00:30:43 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:25.182 00:30:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:25.182 00:30:43 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:29:25.182 00:30:43 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:25.182 00:30:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:25.182 00:30:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:25.182 00:30:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:25.182 00:30:43 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:25.182 00:30:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:25.182 /tmp/:spdk-test:key0 00:29:25.182 00:30:43 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:25.182 00:30:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:25.182 00:30:43 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:25.182 00:30:43 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:25.182 00:30:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:25.182 00:30:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:25.182 00:30:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:25.182 00:30:43 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:25.182 00:30:43 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:25.182 00:30:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:25.182 /tmp/:spdk-test:key1 00:29:25.182 00:30:43 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1706966 00:29:25.182 00:30:43 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1706966 00:29:25.182 00:30:43 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:25.182 00:30:43 keyring_linux -- common/autotest_common.sh@823 -- # '[' -z 1706966 ']' 00:29:25.182 00:30:43 keyring_linux -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.182 00:30:43 keyring_linux -- common/autotest_common.sh@828 -- # local max_retries=100 00:29:25.182 00:30:43 keyring_linux -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.182 00:30:43 keyring_linux -- common/autotest_common.sh@832 -- # xtrace_disable 00:29:25.182 00:30:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:25.182 [2024-07-16 00:30:43.961709] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
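The prep_key calls above wrote /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 in the NVMe TLS PSK interchange format (the NVMeTLSkey-1:00:...: strings that appear below). A sketch of what the format_key helper computes, mirroring the log's own `python -` step and assuming the trailer is a 4-byte little-endian CRC32 of the key material:

python3 - "00112233445566778899aabbccddeeff" <<'EOF'
import base64, sys, zlib

key = sys.argv[1].encode()                   # key material, as passed by the test
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed CRC32 trailer
print("NVMeTLSkey-1:{:02x}:{}:".format(0, base64.b64encode(key + crc).decode()))
EOF

If that assumption holds, the output matches the NVMeTLSkey-1:00:MDAxMTIy...: payload that keyctl print shows for key0 further down.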
00:29:25.182 [2024-07-16 00:30:43.961759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706966 ] 00:29:25.182 [2024-07-16 00:30:44.013466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.442 [2024-07-16 00:30:44.092511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.009 00:30:44 keyring_linux -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:29:26.009 00:30:44 keyring_linux -- common/autotest_common.sh@856 -- # return 0 00:29:26.009 00:30:44 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:26.009 00:30:44 keyring_linux -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:26.009 00:30:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:26.009 [2024-07-16 00:30:44.775217] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.009 null0 00:29:26.009 [2024-07-16 00:30:44.807271] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:26.009 [2024-07-16 00:30:44.807589] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:26.009 00:30:44 keyring_linux -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:26.009 00:30:44 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:26.009 339524075 00:29:26.009 00:30:44 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:26.009 456100937 00:29:26.009 00:30:44 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1707184 00:29:26.009 00:30:44 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1707184 /var/tmp/bperf.sock 00:29:26.009 00:30:44 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:26.009 00:30:44 keyring_linux -- common/autotest_common.sh@823 -- # '[' -z 1707184 ']' 00:29:26.009 00:30:44 keyring_linux -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:26.009 00:30:44 keyring_linux -- common/autotest_common.sh@828 -- # local max_retries=100 00:29:26.009 00:30:44 keyring_linux -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:26.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:26.009 00:30:44 keyring_linux -- common/autotest_common.sh@832 -- # xtrace_disable 00:29:26.009 00:30:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:26.268 [2024-07-16 00:30:44.878269] Starting SPDK v24.09-pre git sha1 ba0567a82 / DPDK 24.03.0 initialization... 
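Unlike keyring_file, this suite parks the PSKs in the kernel session keyring and lets SPDK resolve them by name: the serials printed by keyctl add above (339524075 for key0, 456100937 for key1) are what the later search, print, and unlink steps operate on. The round-trip in isolation, payload abbreviated:

# Add the interchange-format PSK to the session keyring (@s); add prints the serial.
sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:...:" @s)

keyctl search @s user :spdk-test:key0   # resolves the name back to the same serial
keyctl print "$sn"                      # dumps the payload for verification
keyctl unlink "$sn"                     # cleanup; reported below as "1 links removed"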
00:29:26.268 [2024-07-16 00:30:44.878319] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707184 ] 00:29:26.268 [2024-07-16 00:30:44.932203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.268 [2024-07-16 00:30:45.011400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.834 00:30:45 keyring_linux -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:29:26.834 00:30:45 keyring_linux -- common/autotest_common.sh@856 -- # return 0 00:29:26.834 00:30:45 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:26.834 00:30:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:27.092 00:30:45 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:27.092 00:30:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:27.350 00:30:46 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:27.350 00:30:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:27.609 [2024-07-16 00:30:46.230905] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:27.609 nvme0n1 00:29:27.609 00:30:46 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:27.609 00:30:46 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:27.609 00:30:46 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:27.609 00:30:46 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:27.609 00:30:46 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:27.609 00:30:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:27.868 00:30:46 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:27.868 00:30:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:27.868 00:30:46 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:27.868 00:30:46 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:27.868 00:30:46 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:27.868 00:30:46 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:27.868 00:30:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:27.868 00:30:46 keyring_linux -- keyring/linux.sh@25 -- # sn=339524075 00:29:27.868 00:30:46 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:27.868 00:30:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:27.868 00:30:46 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 339524075 == \3\3\9\5\2\4\0\7\5 ]]
00:29:27.868 00:30:46 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 339524075
00:29:27.868 00:30:46 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:29:27.868 00:30:46 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:28.127 Running I/O for 1 seconds...
00:29:29.064
00:29:29.064 Latency(us)
00:29:29.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:29.064 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:29.064 nvme0n1 : 1.01 11206.31 43.77 0.00 0.00 11373.81 3234.06 14303.94
00:29:29.064 ===================================================================================================================
00:29:29.064 Total : 11206.31 43.77 0.00 0.00 11373.81 3234.06 14303.94
00:29:29.064 0
00:29:29.064 00:30:47 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:29:29.064 00:30:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:29:29.323 00:30:47 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:29:29.323 00:30:47 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:29:29.323 00:30:47 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:29:29.323 00:30:47 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:29:29.323 00:30:47 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:29:29.323 00:30:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:29:29.323 00:30:48 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:29:29.323 00:30:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:29:29.323 00:30:48 keyring_linux -- keyring/linux.sh@23 -- # return
00:29:29.323 00:30:48 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:29:29.323 00:30:48 keyring_linux -- common/autotest_common.sh@642 -- # local es=0
00:29:29.323 00:30:48 keyring_linux -- common/autotest_common.sh@644 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:29:29.323 00:30:48 keyring_linux -- common/autotest_common.sh@630 -- # local arg=bperf_cmd
00:29:29.323 00:30:48 keyring_linux -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:29:29.323 00:30:48 keyring_linux -- common/autotest_common.sh@634 -- # type -t bperf_cmd
00:29:29.323 00:30:48 keyring_linux -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:29:29.323 00:30:48 keyring_linux -- common/autotest_common.sh@645 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:29:29.323 00:30:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:29:29.581 [2024-07-16 00:30:48.317618] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:29:29.582 [2024-07-16 00:30:48.317698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1febfd0 (107): Transport endpoint is not connected
00:29:29.582 [2024-07-16 00:30:48.318691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1febfd0 (9): Bad file descriptor
00:29:29.582 [2024-07-16 00:30:48.319698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:29:29.582 [2024-07-16 00:30:48.319709] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:29:29.582 [2024-07-16 00:30:48.319715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:29:29.582 request:
00:29:29.582 {
00:29:29.582 "name": "nvme0",
00:29:29.582 "trtype": "tcp",
00:29:29.582 "traddr": "127.0.0.1",
00:29:29.582 "adrfam": "ipv4",
00:29:29.582 "trsvcid": "4420",
00:29:29.582 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:29:29.582 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:29:29.582 "prchk_reftag": false,
00:29:29.582 "prchk_guard": false,
00:29:29.582 "hdgst": false,
00:29:29.582 "ddgst": false,
00:29:29.582 "psk": ":spdk-test:key1",
00:29:29.582 "method": "bdev_nvme_attach_controller",
00:29:29.582 "req_id": 1
00:29:29.582 }
00:29:29.582 Got JSON-RPC error response
00:29:29.582 response:
00:29:29.582 {
00:29:29.582 "code": -5,
00:29:29.582 "message": "Input/output error"
00:29:29.582 }
00:29:29.582 00:30:48 keyring_linux -- common/autotest_common.sh@645 -- # es=1
00:29:29.582 00:30:48 keyring_linux -- common/autotest_common.sh@653 -- # (( es > 128 ))
00:29:29.582 00:30:48 keyring_linux -- common/autotest_common.sh@664 -- # [[ -n '' ]]
00:29:29.582 00:30:48 keyring_linux -- common/autotest_common.sh@669 -- # (( !es == 0 ))
00:29:29.582 00:30:48 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:29:29.582 00:30:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:29:29.582 00:30:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:29:29.582 00:30:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:29:29.582 00:30:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:29:29.582 00:30:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:29:29.582 00:30:48 keyring_linux -- keyring/linux.sh@33 -- # sn=339524075
00:29:29.582 00:30:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 339524075
00:29:29.582 1 links removed
00:29:29.582 00:30:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:29:29.582 00:30:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:29:29.582 00:30:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:29:29.582 00:30:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:29:29.582 00:30:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:29:29.582 00:30:48 keyring_linux -- keyring/linux.sh@33 -- # sn=456100937
00:29:29.582 00:30:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 456100937
00:29:29.582 1 links removed
00:29:29.582 00:30:48 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1707184
00:29:29.582 00:30:48 keyring_linux -- common/autotest_common.sh@942 -- # '[' -z 1707184 ']'
00:29:29.582 00:30:48 keyring_linux -- common/autotest_common.sh@946 -- # kill -0 1707184
00:29:29.582 00:30:48 keyring_linux -- common/autotest_common.sh@947 -- # uname
00:29:29.582 00:30:48 keyring_linux -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:29:29.582 00:30:48 keyring_linux -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1707184
00:29:29.582 00:30:48 keyring_linux -- common/autotest_common.sh@948 -- # process_name=reactor_1
00:29:29.582 00:30:48 keyring_linux -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']'
00:29:29.582 00:30:48 keyring_linux -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1707184'
killing process with pid 1707184
00:29:29.582 00:30:48 keyring_linux -- common/autotest_common.sh@961 -- # kill 1707184
00:29:29.582 Received shutdown signal, test time was about 1.000000 seconds
00:29:29.582
00:29:29.582 Latency(us)
00:29:29.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:29.582 ===================================================================================================================
00:29:29.582 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:29.582 00:30:48 keyring_linux -- common/autotest_common.sh@966 -- # wait 1707184
00:29:29.841 00:30:48 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1706966
00:29:29.841 00:30:48 keyring_linux -- common/autotest_common.sh@942 -- # '[' -z 1706966 ']'
00:29:29.841 00:30:48 keyring_linux -- common/autotest_common.sh@946 -- # kill -0 1706966
00:29:29.841 00:30:48 keyring_linux -- common/autotest_common.sh@947 -- # uname
00:29:29.841 00:30:48 keyring_linux -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:29:29.841 00:30:48 keyring_linux -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1706966
00:29:29.841 00:30:48 keyring_linux -- common/autotest_common.sh@948 -- # process_name=reactor_0
00:29:29.841 00:30:48 keyring_linux -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']'
00:29:29.841 00:30:48 keyring_linux -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1706966'
killing process with pid 1706966
00:29:29.841 00:30:48 keyring_linux -- common/autotest_common.sh@961 -- # kill 1706966
00:29:29.841 00:30:48 keyring_linux -- common/autotest_common.sh@966 -- # wait 1706966
00:29:30.100
00:29:30.100 real 0m5.201s
00:29:30.100 user 0m9.234s
00:29:30.100 sys 0m1.302s
00:29:30.100 00:30:48 keyring_linux -- common/autotest_common.sh@1118 -- # xtrace_disable
00:29:30.100 00:30:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:29:30.100 ************************************
00:29:30.100 END TEST keyring_linux
00:29:30.100 ************************************
00:29:30.100 00:30:48 -- common/autotest_common.sh@1136 -- # return 0
00:29:30.100 00:30:48 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:29:30.100 00:30:48 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:29:30.100 00:30:48 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:29:30.100 00:30:48 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']'
00:29:30.100 00:30:48 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']'
00:29:30.100 00:30:48 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:29:30.100 00:30:48 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:29:30.100 00:30:48 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:29:30.100 00:30:48 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:29:30.100 00:30:48 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:29:30.100 00:30:48 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:29:30.100 00:30:48 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]]
00:29:30.100 00:30:48 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:29:30.100 00:30:48 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:29:30.100 00:30:48 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:29:30.100 00:30:48 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT
00:29:30.100 00:30:48 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup
00:29:30.101 00:30:48 -- common/autotest_common.sh@716 -- # xtrace_disable
00:29:30.101 00:30:48 -- common/autotest_common.sh@10 -- # set +x
00:29:30.101 00:30:48 -- spdk/autotest.sh@383 -- # autotest_cleanup
00:29:30.101 00:30:48 -- common/autotest_common.sh@1386 -- # local autotest_es=0
00:29:30.101 00:30:48 -- common/autotest_common.sh@1387 -- # xtrace_disable
00:29:30.359 00:30:48 -- common/autotest_common.sh@10 -- # set +x
00:29:38.547 INFO: APP EXITING
00:29:38.547 INFO: killing all VMs
00:29:38.547 INFO: killing vhost app
00:29:38.547 INFO: EXIT DONE
00:29:40.454 Waiting for block devices as requested
00:29:40.454 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:29:40.713 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:29:40.713 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:29:40.713 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:29:40.713 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:29:40.971 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:29:40.971 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:29:40.971 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:29:41.230 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:29:41.230 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:29:41.230 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:29:41.230 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:29:41.489 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:29:41.489 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:29:41.489 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:29:41.489 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:29:41.749 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:29:44.279 Cleaning
00:29:44.279 Removing: /var/run/dpdk/spdk0/config
00:29:44.279 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:29:44.279 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:29:44.280 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:29:44.280 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:29:44.280 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:29:44.280 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:29:44.280 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:29:44.280 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:29:44.280 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:29:44.280 Removing: /var/run/dpdk/spdk0/hugepage_info
00:29:44.280 Removing: /var/run/dpdk/spdk1/config
00:29:44.280 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:29:44.280 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:29:44.280 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:29:44.280 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:29:44.280 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:29:44.280 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:29:44.280 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:29:44.280 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:29:44.280 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:29:44.280 Removing: /var/run/dpdk/spdk1/hugepage_info
00:29:44.280 Removing: /var/run/dpdk/spdk1/mp_socket
00:29:44.280 Removing: /var/run/dpdk/spdk2/config
00:29:44.280 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:29:44.280 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:29:44.280 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:29:44.280 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:29:44.280 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:29:44.280 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:29:44.280 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:29:44.280 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:29:44.280 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:29:44.280 Removing: /var/run/dpdk/spdk2/hugepage_info
00:29:44.280 Removing: /var/run/dpdk/spdk3/config
00:29:44.280 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:29:44.280 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:29:44.280 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:29:44.280 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:29:44.280 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:29:44.280 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:29:44.280 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:29:44.280 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:29:44.280 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:29:44.280 Removing: /var/run/dpdk/spdk3/hugepage_info
00:29:44.280 Removing: /var/run/dpdk/spdk4/config
00:29:44.280 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:29:44.280 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:29:44.280 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:29:44.280 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:29:44.280 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:29:44.280 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:29:44.280 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:29:44.280 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:29:44.280 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:29:44.280 Removing: /var/run/dpdk/spdk4/hugepage_info
00:29:44.280 Removing: /dev/shm/bdev_svc_trace.1
00:29:44.280 Removing: /dev/shm/nvmf_trace.0
00:29:44.280 Removing: /dev/shm/spdk_tgt_trace.pid1323572
00:29:44.280 Removing: /var/run/dpdk/spdk0
00:29:44.280 Removing: /var/run/dpdk/spdk1
00:29:44.280 Removing: /var/run/dpdk/spdk2
00:29:44.280 Removing: /var/run/dpdk/spdk3
00:29:44.280 Removing: /var/run/dpdk/spdk4
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1321451
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1322503
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1323572
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1324203
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1325148
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1325396
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1326360
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1326595
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1326732
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1328330
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1329494
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1329776
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1330067
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1330393
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1330752
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1330983
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1331206
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1331485
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1332404
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1335740
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1336080
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1336348
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1336436
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1336926
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1336999
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1337429
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1337656
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1337925
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1338158
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1338254
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1338435
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1338982
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1339217
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1339515
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1339790
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1339815
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1339880
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1340136
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1340389
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1340642
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1340892
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1341153
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1341405
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1341657
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1341929
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1342187
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1342435
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1342693
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1342951
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1343207
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1343461
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1343720
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1343989
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1344265
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1344543
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1344803
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1345074
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1345172
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1345476
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1349124
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1392985
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1397230
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1407219
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1412469
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1416462
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1416989
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1422938
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1429129
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1429230
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1430001
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1431299
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1432213
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1432686
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1432868
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1433132
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1433144
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1433152
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1434065
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1434977
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1435894
00:29:44.280 Removing: /var/run/dpdk/spdk_pid1436360
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1436369
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1436617
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1437842
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1439038
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1447368
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1447624
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1451875
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1457725
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1460323
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1470834
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1480126
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1481766
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1482788
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1499475
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1503246
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1528244
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1532693
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1534299
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1536135
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1536370
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1536493
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1536641
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1537346
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1539164
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1540138
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1540667
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1542776
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1543410
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1544090
00:29:44.539 Removing: /var/run/dpdk/spdk_pid1548037
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1558175
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1562541
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1568504
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1569812
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1571355
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1575647
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1579682
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1587240
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1587242
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1591896
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1592053
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1592193
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1592647
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1592665
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1597123
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1597700
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1602021
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1604781
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1610682
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1616020
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1624548
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1631501
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1631527
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1649319
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1649866
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1650533
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1651073
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1651971
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1652458
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1653202
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1653753
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1658388
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1658628
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1664604
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1664745
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1666967
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1674570
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1674648
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1679712
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1681673
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1683641
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1684686
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1686744
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1687935
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1696434
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1696991
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1697486
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1700131
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1700675
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1701252
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1704932
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1705102
00:29:44.540 Removing: /var/run/dpdk/spdk_pid1706627
00:29:44.799 Removing: /var/run/dpdk/spdk_pid1706966
00:29:44.799 Removing: /var/run/dpdk/spdk_pid1707184
00:29:44.799 Clean
00:29:44.799 00:31:03 -- common/autotest_common.sh@1445 -- # return 0
00:29:44.799 00:31:03 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:29:44.799 00:31:03 -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:44.799 00:31:03 -- common/autotest_common.sh@10 -- # set +x
00:29:44.799 00:31:03 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:29:44.799 00:31:03 -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:44.799 00:31:03 -- common/autotest_common.sh@10 -- # set +x
00:29:44.799 00:31:03 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:29:44.799 00:31:03 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:29:44.799 00:31:03 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:29:44.799 00:31:03 -- spdk/autotest.sh@391 -- # hash lcov
00:29:44.799 00:31:03 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:29:44.799 00:31:03 -- spdk/autotest.sh@393 -- # hostname
00:29:44.799 00:31:03 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:29:45.057 geninfo: WARNING: invalid characters removed from testname!
00:30:06.988 00:31:22 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:06.988 00:31:24 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:07.925 00:31:26 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:09.825 00:31:28 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:11.745 00:31:30 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:13.650 00:31:32 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:15.553 00:31:33 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:30:15.553 00:31:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:15.554 00:31:33 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:30:15.554 00:31:33 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:15.554 00:31:33 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:15.554 00:31:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:15.554 00:31:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:15.554 00:31:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:15.554 00:31:33 -- paths/export.sh@5 -- $ export PATH
00:30:15.554 00:31:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:15.554 00:31:33 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:30:15.554 00:31:33 -- common/autobuild_common.sh@444 -- $ date +%s
00:30:15.554 00:31:33 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721082693.XXXXXX
00:30:15.554 00:31:33 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721082693.cmxkGc
00:30:15.554 00:31:33 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:30:15.554 00:31:33 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:30:15.554 00:31:33 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:30:15.554 00:31:33 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:30:15.554 00:31:33 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:30:15.554 00:31:33 -- common/autobuild_common.sh@460 -- $ get_config_params
00:30:15.554 00:31:33 -- common/autotest_common.sh@390 -- $ xtrace_disable
00:30:15.554 00:31:33 -- common/autotest_common.sh@10 -- $ set +x
00:30:15.554 00:31:33 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:30:15.554 00:31:33 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:30:15.554 00:31:33 -- pm/common@17 -- $ local monitor
00:30:15.554 00:31:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:15.554 00:31:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:15.554 00:31:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:15.554 00:31:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:15.554 00:31:33 -- pm/common@25 -- $ sleep 1
00:30:15.554 00:31:33 -- pm/common@21 -- $ date +%s
00:30:15.554 00:31:33 -- pm/common@21 -- $ date +%s
00:30:15.554 00:31:33 -- pm/common@21 -- $ date +%s
00:30:15.554 00:31:33 -- pm/common@21 -- $ date +%s
00:30:15.554 00:31:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721082693
00:30:15.554 00:31:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721082693
00:30:15.554 00:31:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721082693
00:30:15.554 00:31:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721082693
00:30:15.554 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721082693_collect-vmstat.pm.log
00:30:15.554 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721082693_collect-cpu-load.pm.log
00:30:15.554 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721082693_collect-cpu-temp.pm.log
00:30:15.554 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721082693_collect-bmc-pm.bmc.pm.log
00:30:16.492 00:31:34 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:30:16.492 00:31:34 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:30:16.492 00:31:34 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:16.492 00:31:34 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:30:16.492 00:31:34 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:30:16.492 00:31:34 -- spdk/autopackage.sh@19 -- $ timing_finish
00:30:16.492 00:31:34 -- common/autotest_common.sh@728 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:16.492 00:31:34 -- common/autotest_common.sh@729 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:30:16.492 00:31:34 -- common/autotest_common.sh@731 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:30:16.492 00:31:35 -- spdk/autopackage.sh@20 -- $ exit 0
00:30:16.492 00:31:35 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:30:16.492 00:31:35 -- pm/common@29 -- $ signal_monitor_resources TERM
00:30:16.492 00:31:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:30:16.492 00:31:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:16.492 00:31:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:30:16.492 00:31:35 -- pm/common@44 -- $ pid=1718590
00:30:16.492 00:31:35 -- pm/common@50 -- $ kill -TERM 1718590
00:30:16.492 00:31:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:16.492 00:31:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:30:16.492 00:31:35 -- pm/common@44 -- $ pid=1718591
00:30:16.492 00:31:35 -- pm/common@50 -- $ kill -TERM 1718591
00:30:16.492 00:31:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:16.492 00:31:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:30:16.492 00:31:35 -- pm/common@44 -- $ pid=1718592
00:30:16.492 00:31:35 -- pm/common@50 -- $ kill -TERM 1718592
00:30:16.492 00:31:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:16.492 00:31:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:30:16.492 00:31:35 -- pm/common@44 -- $ pid=1718613
00:30:16.492 00:31:35 -- pm/common@50 -- $ sudo -E kill -TERM 1718613
00:30:16.492 + [[ -n 1219002 ]]
00:30:16.492 + sudo kill 1219002
00:30:16.501 [Pipeline] }
00:30:16.520 [Pipeline] // stage
00:30:16.526 [Pipeline] }
00:30:16.544 [Pipeline] // timeout
00:30:16.550 [Pipeline] }
00:30:16.567 [Pipeline] // catchError
00:30:16.573 [Pipeline] }
00:30:16.589 [Pipeline] // wrap
00:30:16.594 [Pipeline] }
00:30:16.610 [Pipeline] // catchError
00:30:16.617 [Pipeline] stage
00:30:16.619 [Pipeline] { (Epilogue)
00:30:16.629 [Pipeline] catchError
00:30:16.630 [Pipeline] {
00:30:16.642 [Pipeline] echo
00:30:16.644 Cleanup processes
00:30:16.648 [Pipeline] sh
00:30:16.932 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:16.932 1718769 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:30:16.932 1719089 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:16.945 [Pipeline] sh
00:30:17.228 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:17.228 ++ grep -v 'sudo pgrep'
00:30:17.228 ++ awk '{print $1}'
00:30:17.228 + sudo kill -9 1718769
00:30:17.239 [Pipeline] sh
00:30:17.521 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:27.508 [Pipeline] sh
00:30:27.791 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:27.791 Artifacts sizes are good
00:30:27.805 [Pipeline] archiveArtifacts
00:30:27.812 Archiving artifacts
00:30:27.991 [Pipeline] sh
00:30:28.277 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:30:28.294 [Pipeline] cleanWs
00:30:28.306 [WS-CLEANUP] Deleting project workspace...
00:30:28.306 [WS-CLEANUP] Deferred wipeout is used...
00:30:28.313 [WS-CLEANUP] done
00:30:28.316 [Pipeline] }
00:30:28.345 [Pipeline] // catchError
00:30:28.360 [Pipeline] sh
00:30:28.640 + logger -p user.info -t JENKINS-CI
00:30:28.650 [Pipeline] }
00:30:28.669 [Pipeline] // stage
00:30:28.676 [Pipeline] }
00:30:28.697 [Pipeline] // node
00:30:28.703 [Pipeline] End of Pipeline
00:30:28.730 Finished: SUCCESS